What You Don’t Know Can Hurt You

“Could have, would have, should have,” or, “if I knew then what I know now,” or, “hindsight is 100%.” This past weekend, through a remarkably agile collaboration among governments, agencies, and the public and private sectors, an attempted act of cowardly terrorism was averted; a bomb did not take innocent lives. Success hinged on a former terrorist’s decision to come forward and alert Saudi authorities, which set off a cascade of information sharing and resource deployment worthy of commendation. Of course, the media chimed in and warned that any passenger flight might be carrying explosives in its baggage hold.
The “news” is nothing new, not even news, but it is sufficient to trigger the domino tumble of escalating declarations of more inspection, perhaps all the way to 100% inspection of packages and mail on flights. Does anyone feel better or safer? Test after test conducted by the Government Accountability Office (GAO) indicates that the current 100% inspection of passengers and carry-on luggage lets lots of nasty stuff get through. That’s the problem with inspection: 100% doesn’t yield zero defects, and even 100% after 100% does not get us to zero; multiple studies confirm that. In fact, the responses that succeeded this past weekend were triggered far upstream of airplane loading and depended on state-of-the-art tracking by a commercial entity, UPS.
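The arithmetic behind “100% after 100% does not get us to zero” is easy to sketch. The 90% per-stage catch rate below is an illustrative assumption, not a GAO figure; the point is only that stacking imperfect inspections shrinks the escape rate but never reaches zero:

```python
def escape_rate(catch_rate: float, stages: int) -> float:
    """Fraction of defects slipping past `stages` independent inspections,
    each catching `catch_rate` of the defects that reach it."""
    return (1 - catch_rate) ** stages

# Assumed 90% catch rate per stage, purely for illustration.
for stages in (1, 2, 3):
    print(f"{stages} stage(s): {escape_rate(0.9, stages):.2%} of defects escape")
# → 1 stage(s): 10.00% of defects escape
# → 2 stage(s): 1.00% of defects escape
# → 3 stage(s): 0.10% of defects escape
```

Each added stage costs the same again in resources while the residual risk, though smaller, never hits zero; reducing the defect rate upstream attacks the base quantity instead.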
This example is one where the consequences of failure are severe. Many organizations balance two dimensions when making control decisions: fear and confidence. Fear of the consequences of failure drives escalating, sequential levels of inspections and checks as a comforting barrier. This strategy does not always address the causes of failure, but rather the containment of failures at given points in the process, often relying on the last line of defense. This is a “detect and correct” strategy. It is costly in resources, frequently wasteful, and seldom fail-safe. It attempts to catch a defect, the bomb, in the system before it does damage. It requires some knowledge of where to look and where the damage is likely to occur, or a blanketing “check it all” approach. In complex systems, this approach is very likely to fail, and many of our organizations operate within complex systems.
The alternative is to invest upstream in the process, much as happened this weekend. This alternative strategy is to develop the capability to “predict and prevent.” It plays more to building confidence in the process: build knowledge of the drivers and causes of the defect, and look for the culprits there. Yes, it requires spending more on the front end, but decades of data confirm that it is far more effective. Inspection is often destructive; prevention can be constructive, as knowledge creates focus on what matters and away from what does not.
There is often great pressure to appear to be doing something about defects. Downstream inspections are visible, and they do catch some defects. But when they fail, our customers experience the consequences. Our instincts may tell us to do more checks and check more things. Yet as we move further downstream in our inspections, the compounding pressure to deliver on time overcomes our capacity and degrees of freedom to detect and correct. Careers can die at deadlines; ask any proposal-writing team.
Healthy performance is not so different from healthy living; it is typically managed more effectively before you need to go to the hospital.
