Microscope

Would you rather be mistakenly told you had cancer or have a test miss that you really did?

A while back, my wife Lori and I went to a friend’s birthday party where we met a number of great folks, especially Martin Young, who works as a cytologist in London. Essentially, he manages various screening and testing programmes. One dimension to screening that needs to be balanced is the incidence of ‘false positive’ and ‘false negative’ results. Ideally, of course, you wouldn’t have any false results. But in practice, science is not yet that advanced and these tests are not yet definitive. The screens really just flag ‘potential’ or ‘candidate’ problems.

The dilemma arises when you calibrate the tests and the process of administering them. If you make them very ‘sensitive’, then you reduce the number of ‘false negatives’, but you also increase the number of ‘false positives’. If you make them more ‘specific’ (i.e. more variables need to be satisfied in order to trigger a ‘positive’ result), then you reduce the ‘false positives’ but increase the number of ‘false negatives’.
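For concreteness, here is a minimal sketch (in Python, with entirely made-up scores and thresholds) of how shifting a single screening threshold trades one kind of error for the other: a low, ‘sensitive’ threshold misses almost nothing but raises more false alarms, while a high, ‘specific’ threshold does the reverse.

```python
# Hypothetical test scores: the screen calls anyone above a chosen threshold 'positive'.
diseased_scores = [0.9, 0.8, 0.7, 0.55, 0.4]       # people who really have the condition
healthy_scores  = [0.6, 0.5, 0.45, 0.3, 0.2, 0.1]  # people who do not

def screen(threshold):
    false_negatives = sum(1 for s in diseased_scores if s <= threshold)  # missed cases
    false_positives = sum(1 for s in healthy_scores if s > threshold)    # false alarms
    return false_negatives, false_positives

# A 'sensitive' calibration (low threshold) versus a 'specific' one (high threshold).
for threshold in (0.35, 0.65):
    fn, fp = screen(threshold)
    print(f"threshold={threshold}: false negatives={fn}, false positives={fp}")
```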

Where does one strike the balance? Martin’s highly conjectural view was that the incidence of ‘false positives’ probably introduces more costs to the system at present than the incidence of ‘false negatives’. Maybe that is a cost society is happy to pick up for the reassurance that serious problems are more likely to be identified and addressed early. But evaluating the trade-offs is not that easy. For starters, resources invested in this ‘sensitivity’ are largely resources not available for other preventative programmes that could equally (or perhaps more effectively) help catch or avoid problems altogether. Furthermore, the cost of a false positive is not just the economic cost; there is also a huge emotional and psychological cost. Even the unconfirmed hint that someone ‘might’ have cancer or some other terrible or terminal condition is a massive blow to most people, even those who try to keep it in perspective until additional tests provide further confirmation.

This balance is simply yet another example of the executive balance between upside opportunity (the opportunity to catch a condition early) and downside risk (the risk of missing a diagnosis). One could say that medical ‘Leaders’ would err on the side of high ‘false positives’ (optimising identification), and medical ‘Managers’ would err on the side of high ‘false negatives’ (optimising resources for the whole system to achieve the best overall treatment rates).
