Monday, December 26, 2011

Sensitivity vs. Specificity

When it comes to medicine, nothing is carved in stone, which is one issue this blog takes with medical dramas in the mainstream media; one test a diagnosis does not always make. In these dramas, a test is ordered, e.g. an HIV test, the result comes back, the audience is led to believe that it is infallible, and the storyline continues.
Unfortunately, in the real three-dimensional world of flesh and blood, this is not the case. Every diagnostic procedure has its limitations. In the medical laboratory, two terms are used to define them: specificity and sensitivity.
One can ask, aren’t these two the same thing?
Specificity is the quality or condition of being specific and being specific is sharing or being those properties of something that allow it to be referred to a particular category.
Sensitivity is the quality of being sensitive, sensitive being capable of indicating minute differences.
(Thanks to http://www.merriam-webster.com/dictionary/specific for providing the definitions of these two terms).
So how does this relate to a diagnostic test? Every diagnostic procedure has one of two outcomes: it’s either positive or negative. The question that has to be asked, though, is whether that result is a true positive or a true negative. And, on the other side of the coin, is it a false positive or a false negative?
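To make those four possible outcomes concrete, here is a minimal Python sketch (the `classify` helper is purely illustrative, not part of any laboratory system) that labels a test result against the gold-standard diagnosis:

```python
# Classify each (test result, true disease status) pair into one of the
# four possible outcomes of a diagnostic test.
def classify(test_positive, has_disease):
    if test_positive and has_disease:
        return "true positive"
    if test_positive and not has_disease:
        return "false positive"
    if not test_positive and has_disease:
        return "false negative"
    return "true negative"

print(classify(True, True))    # true positive
print(classify(True, False))   # false positive
print(classify(False, True))   # false negative
print(classify(False, False))  # true negative
```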
This is always a little earth-shattering in medicine when this simple truth is blurted out, sort of like being told that Santa Claus was actually your uncle, and that there’s no Easter Bunny. Unfortunately, it’s reality, and it goes back to the observation made in the first paragraph: one test a diagnosis does not always make.
Fortunately specificity and sensitivity help bring some stability to this shaky ground. When any diagnostic procedure is developed, it has to be tested against something referred to as the ‘Gold Standard’, the diagnostic test in use that is as close to being infallible as possible.
Thanks to examples found in other Web articles, the process of how this is done can be briefly explained.
The article is from http://en.wikipedia.org/wiki/Sensitivity_and_specificity, and the example used is a screening test for colon cancer, fecal occult blood (FOB). Now when screening for colon cancer, endoscopy is the preferred test, the ‘Gold Standard’. However, because it is more expensive and time consuming, it’s not practical or realistic to do it on every patient. Testing for hidden, or occult, blood in stool specimens is an easier screening tool for colon cancer. But how good is it compared to endoscopy?
From the example in this Wikipedia article, when FOB results are compared to endoscopy, there’s some good news and some bad news.
The good news was that, as far as ruling out colon cancer in patients who don’t actually have it, FOB is quite reliable. True, it’s not 100%, but its rate of 91% means the large majority of cancer-free patients will correctly test negative. The ability of a test to correctly identify true negatives is its specificity, and FOB has good specificity.
However, the bad news is that when it comes to catching true positives, FOB’s track record isn’t as stellar. Bottom line: if the FOB is positive, an endoscopy would be highly recommended. Why? Because compared to endoscopy, FOB detects only 67% of the colon cancers that are actually present. That means it misses the other 33%. On average, in every 100 patients who truly have colon cancer, 33 will have a falsely negative FOB, and only an endoscopy would catch them. When it comes to detecting colon cancer, FOB is not as sensitive as endoscopy.
Sensitivity is how well a test detects disease that is actually present: the proportion of diseased patients who correctly test positive. In this example, endoscopy is the more sensitive diagnostic procedure, coming as close as possible to 100% without ever attaining it.
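The two quantities can be worked out directly from the counts in a study. Here is a short Python sketch; the patient counts are illustrative numbers chosen only so that the arithmetic reproduces the FOB rates quoted above (67% sensitivity, 91% specificity), not figures asserted by this blog:

```python
# Sensitivity = TP / (TP + FN): the fraction of diseased patients
# who correctly test positive.
def sensitivity(tp, fn):
    return tp / (tp + fn)

# Specificity = TN / (TN + FP): the fraction of disease-free patients
# who correctly test negative.
def specificity(tn, fp):
    return tn / (tn + fp)

# Illustrative counts: 30 patients with cancer (20 caught, 10 missed),
# 2000 without (1820 correctly negative, 180 falsely positive).
print(f"sensitivity: {sensitivity(tp=20, fn=10):.0%}")    # 67%
print(f"specificity: {specificity(tn=1820, fp=180):.0%}") # 91%
```

Note that the two formulas use entirely separate groups of patients: sensitivity is computed only over those who have the disease, specificity only over those who don’t.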
On the other side of the coin, you want that diagnostic procedure to be specific as well. An example of this can be found at http://www.bmj.com/content/308/6943/1552.full. The disease in question is liver disease, the gold standard a liver biopsy examined by a pathologist, and the test being evaluated a liver scan interpreted by a radiologist.
The good news was that a liver scan was nearly as reliable for detecting liver disease as a biopsy, catching 90% of true positives. Unfortunately, it wasn’t as good at identifying patients who didn’t have liver disease. It could only accurately report 63% of patients as true negatives. That meant that if 100 patients with normal livers were given a liver scan, 63 of them would be correctly reported as having normal livers. Good for them, but the other 37 normal patients would be misdiagnosed as having liver disease.
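The arithmetic for the liver-scan example works the same way. A short Python sketch using per-100-patient counts consistent with the percentages above (90% of true positives caught; 63 of 100 normal livers correctly reported):

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# Per 100 diseased patients: 90 caught, 10 missed.
# Per 100 normal patients: 63 correctly negative, 37 falsely positive.
print(f"scan sensitivity: {sensitivity(tp=90, fn=10):.0%}")  # 90%
print(f"scan specificity: {specificity(tn=63, fp=37):.0%}")  # 63%
```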
There’s one more observation to further muddy the waters. The sensitivity and specificity of the tests mentioned above are not absolute; they are estimates from particular studies. One day an FOB may prove more sensitive, a liver scan more specific. But no matter what, no diagnostic test will have both 100% sensitivity and 100% specificity.
