By William M. London
In Part 2, I explained a way to solve the problem I posed in Part 1 and introduced some key concepts in screening for asymptomatic disease. Here is the initial problem, presented this time a bit more concisely by using the jargon I previously discussed:
Let’s assume that the sensitivity of screening mammography is 80%, its specificity is 90%, and the prevalence of asymptomatic breast cancer in the screened population is 1%. What is the predictive value of a positive result (positive predictive value), and what is the predictive value of a negative result (negative predictive value)?
I explained why the positive predictive value in this scenario is only 7.5%, which means that only 7.5% of women getting a positive test result actually have breast cancer. I also explained why the negative predictive value in this scenario is 99.8%, which means that almost all women getting a negative test result actually don’t have breast cancer. Many of my students find these answers to be surprising, especially the answer for the positive predictive value.
In order to help readers appreciate what for many is a counterintuitive solution, I suggested that readers attempt to solve the problem assuming the same sensitivity (80%) and specificity (90%) as before, but this time with a 10% instead of a 1% prevalence of asymptomatic breast cancer in the screened population. In this post, I’ll solve the problem with the prevalence increased and discuss the importance of prevalence of asymptomatic disease to the success of a screening program.
But First, Let’s Recall the Meaning of Sensitivity and Specificity
It’s very easy to mix up what sensitivity, specificity, positive predictive value, and negative predictive value mean.
Sensitivity and specificity are measures of the validity of the screening test or procedure. Sensitivity is the measure of validity of the screening when applied to people who truly have the disease of interest (even though they don’t have symptoms). Specificity is the measure of validity of the screening when applied to people who truly do not have the disease of interest.
Sensitivity is the percentage of people with the disease of interest who get a positive test result. Take the number of true positive results. Divide that number by the total number of screened individuals who truly have the disease. Then multiply by 100 to express the fraction as a percentage. With a perfectly sensitive test (100% sensitivity), all people with the disease of interest get a positive result when screened.
Specificity is the percentage of people without the disease of interest who get a negative test result. Take the number of true negative results. Divide that number by the total number of screened individuals who truly don’t have the disease of interest. Then multiply by 100 to express the fraction as a percentage. With a perfectly specific test, all people without the disease get a negative result when screened.
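The two definitions above can be captured in a few lines of code. This is a minimal sketch (the function names are my own); the cell letters A, B, C, and D follow the two-by-two table used later in this post.

```python
# Sketch of the two validity measures, using the post's cell names:
# A = true positives, B = false positives, C = false negatives, D = true negatives.

def sensitivity(true_pos, false_neg):
    """Percentage of people WITH the disease who test positive: A / (A + C) * 100."""
    return true_pos / (true_pos + false_neg) * 100

def specificity(true_neg, false_pos):
    """Percentage of people WITHOUT the disease who test negative: D / (B + D) * 100."""
    return true_neg / (true_neg + false_pos) * 100

# The counts derived later in this post (A=800, C=200, D=8,100, B=900)
# recover the assumed 80% sensitivity and 90% specificity:
print(round(sensitivity(800, 200), 1))    # 80.0
print(round(specificity(8100, 900), 1))   # 90.0
```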
In order to avoid confusing sensitivity and specificity, I tell my students to repeat this sentence until they cannot forget it: We [try to] sense what’s there and specify what’s not. (You can’t sense what isn’t there.)
But how are we supposed to know who truly has a disease and who doesn’t? That answer comes from the result of what is referred to as a “gold standard” test. Typically, the “gold standard” is what a pathologist concludes from examining excised tissue. “Gold standard” tests tend to be too invasive, costly, and inefficient for screening programs. (We could get “gold standard” results from screenings at airports if we required all passengers to disrobe, but….)
And Now Let’s Make Sure We Don’t Confuse Validity Measures with Predictive Value Measures
Predictive value is affected by the sensitivity and specificity, but it’s not the same thing. It is also affected by the prevalence of latent disease among those who are screened.
Positive predictive value is the percentage of people with positive test results who actually have the disease of interest. Take the number of true positive results. Divide that number by the total number of positive results (the true positives plus the false positives). Then multiply by 100 to express the fraction as a percentage. The higher the percentage, the more likely it is that positive results are true rather than false positive results.
Negative predictive value is the percentage of people with negative test results who actually do not have the disease of interest. Take the number of true negative results. Divide that number by the total number of negative results (the true negatives plus the false negatives). Then multiply by 100 to express the fraction as a percentage. The higher the percentage, the more likely it is that negative results are true rather than false negative results.
My students often confuse positive predictive value with sensitivity. As I discussed previously, that’s like confusing the probability that individuals love you if they call you with the probability that individuals will call you if they love you.
And my students often confuse negative predictive value with specificity. As I discussed previously, that’s like confusing the probability that individuals don’t love you if they don’t call you with the probability that individuals won’t call you if they don’t love you.
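The predictive values can be sketched the same way. The example counts below are the ones implied by Part 2's original scenario (1% prevalence, 10,000 screened, 80% sensitivity, 90% specificity: 80 true positives, 990 false positives, 20 false negatives, 8,910 true negatives); the function names are my own.

```python
def positive_predictive_value(true_pos, false_pos):
    """Percentage of positive results that are true positives: A / (A + B) * 100."""
    return true_pos / (true_pos + false_pos) * 100

def negative_predictive_value(true_neg, false_neg):
    """Percentage of negative results that are true negatives: D / (C + D) * 100."""
    return true_neg / (true_neg + false_neg) * 100

# Part 2's scenario: A=80, B=990, C=20, D=8,910.
print(round(positive_predictive_value(80, 990), 1))   # 7.5
print(round(negative_predictive_value(8910, 20), 1))  # 99.8
```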
The New Predictive Value Problem Solved
This time we’re assuming 10% instead of 1% prevalence, with no change in sensitivity and specificity. Let’s also assume that 10,000 women are screened, just as we assumed last time, and follow the eight steps from Part 2 using the two-by-two table.
Step 1: Create a 2 by 2 Table with Margins for Column and Row Totals
Create a 2 by 2 table (as shown below this paragraph), which has an additional column and an additional row for keeping track of column and row totals. The first two columns will represent actual breast cancer status (disease present–at an asymptomatic stage–or disease absent). The first two rows will represent the two possible test results (positive versus negative). The total number of women screened is given in the problem and is equal to 10,000. The number 10,000 is labeled in the table as the grand total of four subgroups of women: (A) women with true positive results, (B) women with false positive results, (C) women with false negative results, and (D) women with true negative results. The grand total will equal the sum of all positive and negative test results; it will also equal the number of women with asymptomatic breast cancer plus the number of women without asymptomatic breast cancer.
|  | Disease Present | Disease Absent | Row Totals |
|---|---|---|---|
| Positive Test Result | A (True +) | B (False +) | (A+B) = all women with positive test results |
| Negative Test Result | C (False −) | D (True −) | (C+D) = all women with negative test results |
| Column Totals | (A+C) = all women with asymptomatic breast cancer | (B+D) = all women who don’t have breast cancer | Grand total = (A+B+C+D) = 10,000 |
When a woman with asymptomatic breast cancer gets a mammography result of “positive,” she is a true positive (counted in cell A); when her result is “negative,” the result is a false negative (counted in cell C).
When a woman who does not have breast cancer gets a mammography result of “negative,” she is a true negative (counted in cell D); when her result is “positive,” the result is a false positive (counted in cell B).
Step 2: Use Information Given About Prevalence to Complete the Column Totals
Since we’re assuming that the prevalence of breast cancer is 10% and 10,000 women are screened, that means that 10% of 10,000, or a total of 1,000 women, have the disease. If 1,000 have the disease, then 10,000 − 1,000, or 9,000 women, don’t. The numbers 1,000 and 9,000 are now entered into the table:
|  | Disease Present | Disease Absent | Row Totals |
|---|---|---|---|
| Positive Test Result | A (True +) | B (False +) | (A+B) |
| Negative Test Result | C (False −) | D (True −) | (C+D) |
| Column Totals | (A+C) = 1,000 | (B+D) = 9,000 | Grand total = (A+B+C+D) = 10,000 |
Step 3: Use Information Given About Test Sensitivity to Complete the “Disease Present” Column
Sensitivity is represented in the table as cell A divided by the sum of cells A and C. To be more concise: sensitivity = A/(A+C). Assuming 80% sensitivity, we know that the number in cell A must be 80% of the number already inserted into cell (A+C). Since A+C is conveniently equal to 1,000, A in this problem must equal 800 true positive test results. If A+C equals 1,000 and A equals 800, then C must be 1,000 − 800 = 200. I now insert the numbers 800 and 200 into the table:
|  | Disease Present | Disease Absent | Row Totals |
|---|---|---|---|
| Positive Test Result | A (True +) = 800 | B (False +) | (A+B) |
| Negative Test Result | C (False −) = 200 | D (True −) | (C+D) |
| Column Totals | (A+C) = 1,000 | (B+D) = 9,000 | Grand total = (A+B+C+D) = 10,000 |
Step 4: Use Information Given About Test Specificity to Complete the “Disease Absent” Column
Specificity is equal to the number of true negative test results divided by the total number of people who actually do not have the disease of interest. Specificity is represented in the table as cell D divided by the sum of cells B and D. To be more concise: specificity = D/(B+D). Assuming 90% specificity, we know that the number in cell D must be 90% of the number in (B+D). Since B+D is 9,000, D must be 90% of 9,000, which equals 8,100. If D is 8,100 and B+D is 9,000, then B must be 9,000 − 8,100, which is 900. (Also note that since the specificity is 90%, the false positives as a percentage of all the women without breast cancer must be 10%, and 10% of 9,000 is 900.) Thus, we insert 900 into cell B. So here’s what we now have:
|  | Disease Present | Disease Absent | Row Totals |
|---|---|---|---|
| Positive Test Result | A (True +) = 800 | B (False +) = 900 | (A+B) |
| Negative Test Result | C (False −) = 200 | D (True −) = 8,100 | (C+D) |
| Column Totals | (A+C) = 1,000 | (B+D) = 9,000 | Grand total = (A+B+C+D) = 10,000 |
Step 5. Complete the Table by Summing A+B and Then Summing C+D
We find that (A+B), which represents the total number of positive test results, equals 1,700 and that (C+D), which represents the total number of negative test results, equals 8,300. We insert 1,700 and 8,300 as shown below.
|  | Disease Present | Disease Absent | Row Totals |
|---|---|---|---|
| Positive Test Result | A (True +) = 800 | B (False +) = 900 | (A+B) = 1,700 |
| Negative Test Result | C (False −) = 200 | D (True −) = 8,100 | (C+D) = 8,300 |
| Column Totals | (A+C) = 1,000 | (B+D) = 9,000 | Grand total = (A+B+C+D) = 10,000 |
Step 6: Check the Arithmetic for the Completed Table
(A+B) + (C+D) should sum to (A+B+C+D), which we already know is 10,000. Since 1,700 + 8,300 = 10,000, we have confirmed that we completed the table correctly.
Step 7: Use the Numbers in the Completed Table to Calculate the Positive Predictive Value
What percent of the women who receive a positive test result actually have breast cancer?
We know that 1,700 is the total for the “Positive Test Result” row. That represents all women who received a test result of positive. We know that 800 of these women actually had asymptomatic breast cancer (true positives). Therefore, 800 out of 1,700, or 800/1,700, or 47.1% of the women with positive test results actually have breast cancer. That means 52.9% of the positive test results in this scenario were false positives.
In the initial problem with 1% prevalence, the positive predictive value was 7.5%. Increasing the prevalence by nine percentage points in this case raised the positive predictive value by nearly forty percentage points (from 7.5% to 47.1%). But still, more than half of the positive test results are false positives.
Step 8: Use the Numbers in the Completed Table to Calculate the Negative Predictive Value
What percent of the women who receive a negative test result actually do not have breast cancer?
We know that 8,300 is the total for the “Negative Test Result” row. That represents all women who received a test result of negative. We know that 8,100 of these women do not have breast cancer (true negatives). Therefore, 8,100 out of 8,300, or 8,100/8,300, or 97.6% of the women with negative test results do not have breast cancer. Only 2.4% of the women with negative test results actually have the disease.
The increase in prevalence from 1% to 10% reduced the negative predictive value, but only slightly from 99.8% to 97.6%.
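The eight steps above can be collapsed into one short function that fills in the two-by-two table from the three given quantities and reads off both predictive values. This is a minimal sketch (the function name is my own):

```python
def screening_table(n_screened, prevalence, sens, spec):
    """Fill the 2-by-2 table's cells A, B, C, D from the three inputs,
    then return the cells plus the positive and negative predictive values (%)."""
    diseased = n_screened * prevalence       # column total A + C
    disease_free = n_screened - diseased     # column total B + D
    a = sens * diseased                      # true positives
    c = diseased - a                         # false negatives
    d = spec * disease_free                  # true negatives
    b = disease_free - d                     # false positives
    ppv = a / (a + b) * 100                  # A / (A + B)
    npv = d / (c + d) * 100                  # D / (C + D)
    return a, b, c, d, ppv, npv

# This post's scenario: 10,000 screened, 10% prevalence, 80% sensitivity, 90% specificity.
a, b, c, d, ppv, npv = screening_table(10_000, 0.10, 0.80, 0.90)
print(round(a), round(b), round(c), round(d))  # 800 900 200 8100
print(round(ppv, 1), round(npv, 1))            # 47.1 97.6
```

Rerunning it with `prevalence=0.01` reproduces Part 2's answers of 7.5% and 99.8%.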
Implications
Increasing the prevalence greatly improved the positive predictive value with only a modest sacrifice in negative predictive value. The scenario with higher prevalence is a better one for conducting a screening program. When the prevalence of asymptomatic disease is very low, an unacceptably high share of positive test results will be false positives.
The importance of prevalence is illustrated in a slide presentation created by Mary Beth Bigley, DrPHc, ANP, as director of the Nurse Practitioner Program at George Washington University. One of the slides shows what happens to positive predictive value (PPV) when both sensitivity and specificity are 99% and the prevalence of asymptomatic disease increases in the screened population. At a prevalence of 0.1%, the PPV is just 9.0%. A tenfold increase in prevalence, to 1.0%, brings the PPV to 50%. So even with a very valid test, when prevalence is only 1%, half the positive test results will be false. The slide shows that at 5% prevalence, PPV rises to 83.9%. So even when 1 out of every 20 people screened has the disease, about 16% of the positive test results will be false.
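The slide's numbers are easy to check with the Bayes' theorem form of positive predictive value: PPV = (sensitivity × prevalence) / (sensitivity × prevalence + (1 − specificity) × (1 − prevalence)). A quick sketch (the function name is my own):

```python
def ppv_percent(prevalence, sens, spec):
    """Positive predictive value via Bayes' theorem, as a percentage.
    Numerator: true positive fraction; denominator: all positive fraction."""
    p = prevalence
    return sens * p / (sens * p + (1 - spec) * (1 - p)) * 100

# 99% sensitivity and 99% specificity at three prevalences:
for p in (0.001, 0.01, 0.05):
    print(f"prevalence {p:.1%}: PPV {ppv_percent(p, 0.99, 0.99):.1f}%")
# prevalence 0.1%: PPV 9.0%
# prevalence 1.0%: PPV 50.0%
# prevalence 5.0%: PPV 83.9%
```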
In planning any screening program, the potential for benefit must be weighed against the potential for harm, including the harmful impact of screening on those who wind up receiving false positive test results. Weighing benefits and harms is a challenging task addressed in the U.S. Preventive Services Task Force’s draft research plan for updating its breast cancer screening recommendation. I’ll need to save a discussion of issues considered by the Task Force for another blog post. I promise not to include in it any more problems to solve with two-by-two tables.
____________________________________________________________________________________________
William M. London is a specialist in the study of health-related superstition, pseudoscience, sensationalism, schemes, scams, frauds, deception, and misperception, who likes to use the politically incorrect word: quackery. He is a professor in the Department of Public Health at California State University, Los Angeles; a co-author of the college textbook Consumer Health: A Guide to Intelligent Decisions (ninth edition copyright 2013); the associate editor (since 2002) of Consumer Health Digest, the free weekly e-newsletter of Quackwatch; one of two North American editors of the journal Focus on Alternative and Complementary Therapies; co-host of the Quackwatch network’s Credential Watch website; and a consultant to the Committee for Skeptical Inquiry. He earned his doctorate and master’s in health education, master’s in educational psychology, baccalaureate in biological science, and baccalaureate in geography at the University at Buffalo (SUNY), and his master of public health degree from Loma Linda University. He successfully completed all required coursework toward a Master of Science in Clinical Research from Charles R. Drew University of Medicine and Science, but he has taken way too much time writing up his thesis project: an investigation of therapeutic claims and modalities promoted by chiropractors in the City of Los Angeles.