It’s Your Call
A blown call costs a pitcher a perfect game. This week it really happened. Everybody felt terrible, apologies ensued, and the umpire showed genuine remorse and accepted full responsibility for the failed measurement. The poor measurement did not change the perfection of the actual performance; a better gage, instant replay, validated that. What it changed was the record of what happened. Those who missed this story and later evaluate the statistics of pitching performance will have only the historic data, data that is a false witness of events. Imagine the effects of all the poor measurements in one year of major sports events. Do they change important outcomes? Do they steer rewards or punishments? How about everything riding on them for gamblers, in or out of Las Vegas?
Bad measurement in sports evokes big emotions: outrage, indignation, and a score of aftereffects that include bragging rights. Does bad measurement in our enterprises conjure similar reactions? What are the chances that we are making decisions based on poor measurement: the wrong lens, an obstacle in the way, poor technology? You get the picture. If so, the issue is ubiquitous. In over two decades of helping organizations with performance gaps, poor measurements have always been at play, sometimes with disastrous consequences.
The issue is not a simple one. For example:
• Do we use the data that we have and try to conjure meaning from it? Or do we start with what we want to know and then measure accordingly?
• Are we sure that the movement in the data is representative of what is actually happening within the process?
• Do different individuals or functions measure differently? Would they come up with the same value when measuring the same process? (A small sketch of this appears after the list.)
• Does the data just not make any sense?
• How about our “calls” on what we evaluate? Do two managers reach the same conclusion about someone’s performance? If not, who is right? What are the consequences to the individual?
• Do we introduce our own bias into the measurement and evaluation?
• Do we have folks who are easier graders and those who are more demanding? Do they evoke similar or different performance?
• How much of our decision process relies on a subjective call (an opinion) versus an objective measurement (an actual number)? Do we know how often our calls are wrong?
• Do compliance requirements change how we measure performance?
• What happens when lab results are wrong? What if wrong results bring really bad news, or mask the bad news and bring good news instead?
• Are we ever surprised by events that would have been very visible had we measured differently?
• Does a part of the organization hide or hoard data?
• Do our customers measure our deliverables and call about problems that we should have prevented them from experiencing? What did our data say?
• Do we have our vision checked from time to time? Why is that?
• Do we ever catch how some advertisers deceive with clever use of statistics? How about in our enterprises?
• Is it safer for us to call someone “safe” rather than “out” when we’re not sure, just in case? The consequences are often more severe in one direction than the other.
• Have we ever spent a lot of money and resources on a decision made with poor data?
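To make the measurement problem behind several of these questions concrete, here is a minimal sketch in Python. Every number in it is hypothetical: two appraisers measure the same ten parts against an assumed spec limit, one with a slight bias and one with a noisier technique, and we count how often their “calls” disagree even though the parts themselves never changed.

```python
import random

random.seed(42)

SPEC_LIMIT = 10.0                                  # hypothetical upper spec limit
TRUE_VALUES = [9.5 + 0.1 * i for i in range(10)]   # true part values, 9.5 to 10.4

def measure(true_value, bias, noise_sd):
    """One measurement = truth + appraiser bias + random gage noise."""
    return true_value + bias + random.gauss(0, noise_sd)

def call(measured):
    """The 'call': pass if the measurement is within spec."""
    return "pass" if measured <= SPEC_LIMIT else "fail"

disagreements = 0
wrong_calls = 0
for true in TRUE_VALUES:
    # Appraiser A: small positive bias, steady hand.
    a = call(measure(true, bias=0.15, noise_sd=0.05))
    # Appraiser B: no bias, but a noisier technique.
    b = call(measure(true, bias=0.0, noise_sd=0.25))
    truth = call(true)  # what a perfect gage would say
    disagreements += (a != b)
    wrong_calls += (a != truth) + (b != truth)
    print(f"true={true:5.2f}  truth={truth:4s}  A={a:4s}  B={b:4s}")

print(f"\nA and B disagreed on {disagreements} of {len(TRUE_VALUES)} parts;")
print(f"{wrong_calls} of {2 * len(TRUE_VALUES)} individual calls were wrong.")
```

The performance of the parts never changes; only the measurement does. Yet the record, and any reward or punishment tied to it, follows the call, not the truth. A formal gage R&R study asks exactly this question with designed, repeated trials instead of a simulation.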
So, how’s our data today?