Prove It

Asking for science

Michael Selik

Metrics have been popular in business for a while now. Executives know that they can’t manage what they can’t measure, so corporations around the world have been pushing for consolidated business intelligence systems that provide a single source of truth. Business intelligence is focused on providing high-quality, comprehensive data that can be explored by slicing, dicing, drilling down, rolling up, and pivoting. People at all levels are using business intelligence systems to make reports that aggregate and visualize various key performance indicators. These reports are great for generating hypotheses, but a data scientist is not satisfied with an untested hypothesis.

How can a manager transition a team from business intelligence to data science? Someone asked me recently:

How do I make Data Science the core of our process? How do I weave it into the fabric of what we do on a daily basis… so it’s more than just running some reports for management? How, as a manager, do I integrate the underlying concepts of Data Science into our team on a deeper level?

In a way, the answer is simple: be skeptical. Don’t be satisfied with basic comparisons. Ask for significance levels, plausible causal models, and random samples. Of course, you’re not trying to publish a paper; you’re just trying to make good decisions. Ask for a level of proof commensurate with the consequences of the decision.
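To make "ask for a significance level" concrete, here is a minimal sketch of one way to get one without any statistical libraries: a permutation test on a hypothetical A/B comparison. The data values and function name are illustrative, not from the original.

```python
import random
from statistics import mean

def permutation_p_value(a, b, n_permutations=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    Returns the fraction of random label shufflings whose absolute
    difference in means is at least as large as the observed one --
    a simple, assumption-light p-value.
    """
    rng = random.Random(seed)
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            count += 1
    return count / n_permutations

# Hypothetical daily conversion rates for a control and a variant
control = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13]
variant = [0.14, 0.16, 0.13, 0.15, 0.17, 0.14]
p = permutation_p_value(control, variant)
```

A small p-value here says the gap between the groups would rarely appear by chance alone, which is a far stronger claim than a basic comparison of two averages.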

Appropriate skepticism is the art of balancing priorities. If your colleagues are familiar with software engineering, then a good metaphor is testing software. New code is a hypothesis. How rigorously one should test the code depends on the consequences of a mistake. Some websites simply test in production, prioritizing getting things done over getting things right. Sending a robot to Mars requires higher standards. When your team says they are too busy to experiment before executing, ask them what will happen if they’ve made a mistake.
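The metaphor can be made literal in a few lines of Python: treat a new function as a hypothesis and write assertions that would falsify it if it were wrong. The pricing rule and values below are hypothetical, chosen only to illustrate the habit.

```python
def shipping_cost(weight_kg: float, rate_per_kg: float = 4.0) -> float:
    """Hypothetical pricing rule: flat rate per kilogram, free under 0.5 kg."""
    if weight_kg < 0.5:
        return 0.0
    return round(weight_kg * rate_per_kg, 2)

# Each assertion is an attempt to falsify the hypothesis "this code is correct".
assert shipping_cost(0.2) == 0.0                   # below the free threshold
assert shipping_cost(2.0) == 8.0                   # 2 kg at 4.0 per kg
assert shipping_cost(1.5, rate_per_kg=2.0) == 3.0  # custom rate
```

How many such assertions to write is exactly the judgment call in the text: a handful for a low-stakes website, far more for a Mars rover.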

So, the first step is to get your team to think about the magnitude of each decision. Once they’re in the habit of thinking about consequences, they should start to feel a little guilty that they’re not testing their beliefs. From there, you can gradually bring more science into your team’s processes by asking good questions.

As part of evaluating the consequences of the team’s decisions, it’s important to quantify your current performance. That’s a hard problem in itself, especially if your team is not directly generating revenue. It’s to your advantage (career-wise) to link your team’s performance as closely to revenue as possible, but not so tightly that the metric depends on factors your team cannot control.

Start with your team’s most time-consuming activities and ask how you’ll recognize success. I have seen many teams working hard without any way of measuring the effect of their hard work. As manager, you should be skeptical of all your organization’s processes.

This was my answer, and the manager replied that indeed his team’s current performance metrics did not align with his goals for the team. He set off to improve the appropriateness and accuracy of the team’s performance metrics.