
Positive results on rapid COVID tests aren’t very trustworthy

An article over at Recode about rapid antigen tests for COVID-19 was posted early Friday and has been up ever since. It notes that the rapid tests aren't as accurate as PCR tests:

So you might be wondering: What’s the point if rapid tests aren’t as accurate as PCR tests? Well, rapid antigen tests, which look for a specific protein on the Covid-19 virus, remain extremely effective at confirming positive cases. Put simply, if you test positive on a rapid test, you almost certainly have Covid-19. If you test negative, in some cases, you might still test positive on a PCR test, which is much more sensitive because it tests for genetic evidence of the virus. Rapid tests may not pick up positive cases in people who have been vaccinated or who have recently recovered from Covid-19, since they may produce less virus, one expert told Recode.

This is 180 degrees wrong, isn't it? The PPV (Positive Predictive Value) of most rapid tests is little better than 50-50 if you're asymptomatic and live in an area with normal prevalence of COVID. (If you have symptoms, then a positive result is pretty reliable.) In either case, if you get a positive result you should isolate and then test again the next day.

Conversely, if you get a negative result, the odds are almost 100% that you don't have COVID at that moment. Tomorrow, of course, is another story.
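
If you want to check the arithmetic yourself, here's a minimal Bayes sketch; the 85% sensitivity and 99% specificity figures are illustrative assumptions, not the numbers for any particular test:

```python
# Back-of-the-envelope Bayes calculation for a rapid antigen test.
# Sensitivity and specificity here are illustrative assumptions,
# not figures for any specific product.

def predictive_values(prevalence, sensitivity=0.85, specificity=0.99):
    """Return (PPV, NPV) for a test at a given pre-test probability."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    false_neg = prevalence * (1 - sensitivity)
    true_neg = (1 - prevalence) * specificity
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# ~1% = asymptomatic at "normal" prevalence; ~50% = classic symptoms during a surge
for prev in (0.01, 0.05, 0.50):
    ppv, npv = predictive_values(prev)
    print(f"pre-test probability {prev:>5.1%}:  PPV {ppv:5.1%}   NPV {npv:6.2%}")
```

At a 1% pre-test probability that works out to a PPV in the mid-40s and an NPV above 99.8%, which is the pattern described above; at a 50% pre-test probability (classic symptoms during a surge) the PPV climbs to roughly 99%.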

I wondered if I was going crazy, so I checked myself on this after reading the Recode piece. Everything I read confirms that it's a positive result that's less accurate. Can anybody suggest an explanation for this?

18 thoughts on “Positive results on rapid COVID tests aren’t very trustworthy”

  1. als

    Both takes are wrong because the positive and negative predictive value depend on the pre-test probability of having the disease. During our current surge, someone who has typical symptoms has a very high pre-test probability of disease. This is because the "prevalence of disease among symptomatic people" is much higher than "the prevalence of disease among the whole population". Therefore the positive predictive value will be high and the negative predictive value will be low.

    On the other hand, "the prevalence of disease among asymptomatic people" is actually lower than "the prevalence of disease among the whole population" so for an asymptomatic person the positive predictive value is low and the negative predictive value is very high.

    You can't simply use one number for prevalence and generalize it to anyone taking the test - PPV and NPV change depending on the risk for disease of the test taker.

  2. DanK

    Yes, the 50-50 thing only applies if you test a randomised sample or the whole population at about 1% prevalence. Then, and only then, are false positives and actual positives a 50/50 proposition. If you test positive and have symptoms, you have roughly a 99% likelihood of being positive. If you are asymptomatic, your pre-test likelihood of being positive is only around 1%. At 1% prevalence, the chance that a positive result is a false positive is about 50%; if prevalence were 0, every positive would be a false one.

    Statistics is hard.

    And, yes, the Recode piece is 180 degrees wrong.

    Statistics is hard.

    1. als

      The Recode piece isn't exactly 180 degrees wrong: it is essentially correct if we are talking about people with typical Covid symptoms (anosmia, dry cough, sore throat, etc.). But it never specifies what kinds of test takers it is talking about, which is a major oversight.

  3. sonofthereturnofaptidude

    If you're symptomatic right now, what are the chances that the symptoms are not Covid? That would factor in, too, wouldn't it?

  4. yerfdogyrag

    The other thing to understand is why there are false positives at all. The rapid tests are effectively using antibodies to show the presence of Covid proteins. If the test is positive, you have Covid proteins, but not necessarily active disease (could be old fragments). The tests do not get (for instance) flu mixed up as a positive.

    The reason for the false positives is that many people will get asymptomatic Covid and then these fragments will be around for weeks after they've recovered from the disease. The test is accurate (Covid fragments detected) but it's a false positive for active virus.

    Basically, if you get a negative rapid test on Monday and Wednesday and then show positive on Friday, you pretty much have the disease and should stay away from other people.

    1. als

      This is incorrect. False positives can occur due to test failure and also due to patient factors such as the patient having non-specific antibodies present (which essentially mimic the target covid protein), or a very viscous sample which could effectively bind to the antigen binding reagent in the test.

      1. yerfdogyrag

        You're right about that and that's why I said "pretty much" rather than "guaranteed". For instance, I did read that some children would add some fruit juice to their tests so that they'd show up positive. The actual test failures seem fairly small (just one BinaxNOW study that I saw had 3 weird positives/1000 where the line was faded).

        From a practical standpoint, taking tests before going out around other people is a reasonable approach even when the prevalence is low.

        1. als

          3/1000 is actually a lot when dealing with very low prevalence. For example, if you assume in an asymptomatic population the current prevalence is around 0.1% (1/1000), then if you test a thousand asymptomatic people, 4 will test positive and 1 will actually have the disease for a positive predictive value of 25%.

          On the other hand, if testing a population with typical symptoms, the current prevalence might be more like 500/1000, so the positive predictive value is very high: roughly 501 or 502 out of a thousand will test positive, 500 of whom actually have the disease, for a positive predictive value of about 99.7%.

          You are correct that the negative predictive value *in an asymptomatic person* is very high so taking a test before you go out is useful as long as you are asymptomatic.
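
          To make that counting argument explicit, here's a minimal sketch using the same assumptions as above (a 3-in-1,000 false positive rate and, for simplicity, a test that catches every true case):

```python
# Reproduce the counting argument above: 3/1000 false positive rate,
# and (for simplicity) assume every true case tests positive.

def ppv_from_counts(prevalence, n=1000, false_pos_rate=0.003):
    infected = prevalence * n
    uninfected = n - infected
    true_positives = infected                     # assumed 100% sensitivity
    false_positives = uninfected * false_pos_rate
    return true_positives / (true_positives + false_positives)

print(f"asymptomatic (0.1% prevalence): PPV = {ppv_from_counts(0.001):.0%}")
print(f"symptomatic  (50% prevalence):  PPV = {ppv_from_counts(0.5):.1%}")
```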

  5. golack

    The confusion is between sensitivity, i.e. the antigen test will only pick up about half the positives that the PCR test does, and accuracy, i.e. out of 150 antigen positives, 4 will be false positives compared to PCR. That was out of ca. 2K or 3K tested. There was a link to an article in the comments on an earlier post, and the numbers are from memory, so: large error bars. The study was at a testing site with positivity of ca. 10-20% or so via PCR, and under those conditions the PPV for the antigen test was 92%.

    Here's the thing--what are tests for? If prevalence drops, that 92% PPV from the study drops too--so maybe a single positive antigen test might only mean there's a 50-50 (or less) chance of it being a true positive. But if there are, say, 6 false positives out of 3000 tests (which makes the math easier), that means the odds of a false positive are about 0.2%.

    If you have a large event and use rapid tests, you'll be excluding 0.2% of the population due to false positives. If prevalence is high, you'll also be excluding 0.5%, or nowadays up to 3+%, of the population due to real positives and missing just as many due to false negatives. If transmission is low, you'll be excluding 0.1% or less due to real positives and still letting that many in with false negatives.
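
    To put rough numbers on that trade-off, here's a sketch for a hypothetical event of 10,000 people, using the ~0.2% false positive rate estimated above; the ~85% sensitivity figure is an illustrative assumption:

```python
# Rough screening arithmetic for a large event: who gets turned away
# by mistake vs. how many infectious people are caught or slip through.
# The 0.2% false positive rate comes from the estimate above; the 85%
# sensitivity is an illustrative assumption.

def screen_event(attendees, prevalence, sensitivity=0.85, false_pos_rate=0.002):
    infected = attendees * prevalence
    healthy = attendees - infected
    turned_away_by_mistake = healthy * false_pos_rate    # false positives
    caught = infected * sensitivity                       # true positives
    slip_through = infected * (1 - sensitivity)           # false negatives
    return turned_away_by_mistake, caught, slip_through

for prev in (0.001, 0.03):   # low transmission vs. a surge
    fp, tp, fn = screen_event(10_000, prev)
    print(f"prevalence {prev:.1%}: ~{fp:.0f} turned away by mistake, "
          f"~{tp:.0f} infectious caught, ~{fn:.0f} slip through")
```

    The false positive exclusions stay at roughly 0.2% of attendees regardless of prevalence, while the number of real infections caught (and missed) scales with prevalence.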

    The argument for the false negative rate is that the antigen test requires a higher viral load to trigger a positive result, so it should be treated as an infectiousness test. That's fine if you're clearing an infection. Not that helpful if your infection is just taking off--i.e. you might not be infectious now but will be in a few hours.

    The question then is: is it worth excluding 0.2% of the population unnecessarily in order to exclude most of the currently infectious individuals? Should we be letting in people with a positive test when the PPV is at 50%? What about 10%?

    Rapid antigen tests really do have their place, even with their false positives. From a public health perspective, they can be a quick way to assess disease progression in a community. For large venues, they can help screen out infectious individuals at a small cost due to false positives. For the individual, a positive means a follow-up with a PCR test--ideally with a one-day turnaround--unless you're symptomatic, in which case you can just assume the positive test was real.

    Worldometer's Covid page has "active cases" and projections via IHME. In places, they have ca. 7-8% of the population under "active cases" (which includes estimates of asymptomatic cases). Their projections have daily new infections peaking about now in some hard-hit places, and cresting before February for the US as a whole.

  6. azumbrunn

    We should be happy with what we've got. Let's stipulate that a positive result at 1% prevalence tells you the probability you are infected is 50%. That's up from 1%. This is certainly a good enough reason to cancel a party or a dinner and stay at home for a day before re-testing. So these tests work reasonably well for the purpose of suppressing the spread of the virus (though a lot more testing would be required--plus near-100% adherence to quarantine rules!--to make an impact on the disease statistics). To put it another way: if you test school children before admitting them to class, you reduce the probability of infection in the class substantially--and the magnitude of the effect goes up as the virus becomes more prevalent.

    One caveat about statistics: Some false positives are random test failure, others are due to some idiosyncrasy of the testees (such as some random protein--or any large molecule in fact--in their system that gloms on to the test-antibodies or exceedingly viscous samples). Random failures are highly unlikely to occur twice in a row but idiosyncrasies probably stay with the individual for two or even more days in a row.
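
    A small sketch of why that independence point matters: if a false positive is a one-off random failure, a second positive on re-test moves the probability a long way; if it comes from an idiosyncrasy of the person, the repeat test adds much less. The sensitivity and specificity values below are illustrative assumptions:

```python
# Posterior probability of infection after repeated positive results,
# assuming test errors are independent from one test to the next.
# Sensitivity/specificity are illustrative assumptions.

def prob_infected(prior, n_positives, sensitivity=0.85, specificity=0.99):
    p_results_if_infected = sensitivity ** n_positives
    p_results_if_healthy = (1 - specificity) ** n_positives
    numerator = prior * p_results_if_infected
    return numerator / (numerator + (1 - prior) * p_results_if_healthy)

prior = 0.01   # asymptomatic person, ~1% pre-test probability
print(f"after one positive:  {prob_infected(prior, 1):.1%}")
print(f"after two positives: {prob_infected(prior, 2):.1%}")
```

    If the false positive is instead driven by something persistent about the person or the sample, the two results aren't independent and the second test tells you much less--which is the caveat above.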

    A third point: "false negatives" appear to be cases where the concentration of virus protein is below the detection level of antibody tests. The detection limit of PCR is orders of magnitude better than that of antibody tests; maybe actually too good in some cases. The lower the concentration of virus, the lower the chance of infecting another person. So a technically false negative result is still reassuring in that way: you may have the virus, but you are not very dangerous (at the time the test is taken!).

  7. caburrito

    Sensitivity and specificity are what we should be looking at: the positive and negative predictive values can be calculated from there based on the prevalence in a specific time and place, but sensitivity and specificity are (basically) constant (unless the testing technology improves). The takes about "positive means positive" are based on research showing that the sensitivity of the rapid tests is fairly low (meaning a relatively high false negative rate) while the specificity is high (low false positive rate). This study (https://pubmed.ncbi.nlm.nih.gov/34134035/) claims the specificity could be basically 100%, which would mean there are no false positives. Here's another that has the specificity at 99.1%: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8530784/

    That's a really big difference, though, if prevalence is low. Back-of-the-envelope calculations say that if sensitivity is around 63.5% and specificity is 100% (as per the first link), PPV is 100% no matter what the prevalence is. If sensitivity is around 82.9% and specificity is 99.1% (as per the second link), PPV is around 77% if prevalence is 3.5%, but closer to 48% if prevalence is around 1%.

    BUT, the CDC estimates only 1 in 4 cases of COVID get reported. SO what's the real prevalence rate today, where you live? That's the real question that everything hinges on.

    All this is to reinforce the point: prevalence is really important here.
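
    Here's a quick sketch that reproduces those back-of-the-envelope numbers, plugging in the sensitivity/specificity pairs reported in the two linked studies:

```python
# Reproduce the back-of-the-envelope PPV numbers above, using the
# sensitivity/specificity pairs reported in the two linked studies.

def ppv(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

studies = [
    ("first link  (sens 63.5%, spec 100%) ", 0.635, 1.000),
    ("second link (sens 82.9%, spec 99.1%)", 0.829, 0.991),
]

for label, sens, spec in studies:
    for prev in (0.035, 0.01):
        print(f"{label} @ prevalence {prev:.1%}: PPV = {ppv(prev, sens, spec):.0%}")
```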

    1. caburrito

      (hate to reply to myself, but a slight edit to the above): Unless specificity really is 100%, or close to it. If specificity really is 100%, there are no false positives, and any positive result really means COVID regardless of prevalence.

      1. als

        Using the prevalence of disease in the entire population only makes sense if what we are doing is randomly testing people in the population.

        If we are instead thinking about the test's significance for an individual person who takes it, then we have to imagine the prevalence of disease in a population of people exactly like that person. This is what we call the pre-test probability.

        Once you estimate the pre-test probability of disease in the person you are testing, you can use sensitivity and specificity to calculate the positive and negative predictive value.

        Assuming that the prevalence of the disease in the entire population is the same as the pre-test probability of disease in an individual taking the test is why people are getting this wrong.

  8. golack

    BTW--there's a big difference between saying tests are not "trustworthy" (your headline) and saying tests can be misused. That is also different from saying they do not provide any added value, i.e. that the test result won't affect the treatment course and so becomes superfluous.

    We need to be doing community-wide testing to track outbreaks, along with PCR sequencing. Attacking testing as "untrustworthy" when the tests are behaving as expected is very damaging. Attacking people for misrepresenting what tests can do is fine--but that's not the headline.

    Dealing with this, or any, pandemic requires defense in depth. Why? Because no single measure is perfect. Attacking each measure because it is not perfect, and therefore supposedly should not be bothered with, means all our defenses against Covid will collapse.
    1. Attack masking because masks don't stop you from getting infected, esp. cloth masks. Of course, masking is meant to stop you from spreading the disease should you be infected and not have symptoms yet. Even cloth masks help, though less so with omicron. You'll see tests showing cloth masks let particles out--but they still block you from breathing down people's necks (or directly into their face).
    2. Social distancing of 6 ft. was a number pulled out of a hat! So??? The spread of viral particles after exhalation can be complex, but the further away people are, the more time there is for the concentration of particles to be diluted/drop out/etc, and the less chance there is for transmission. Yes, 10' is better than 6, and 6 is a lot better than 3. But a rule has to be set--so 6' is fine, figuring that will keep most people at least 3' apart.
    3. Most people are not actively infected, so why bother? The point, of course, is to keep it that way. What was RBG's comment? My umbrella is keeping me dry in a rainstorm--but since I'm dry in a rainstorm, why do I still need my umbrella? Masks and social distancing are fairly simple mitigation measures. Even if they only block 10 to 20% of infections, that can go a long way toward stopping community spread.
    4. Granted, there is some hygiene theater. Deep cleaning subway cars really won't affect Covid transmission. But maybe they should still clean them anyway. Hand sanitizer everywhere was probably overkill too--but possibly helped to limit seasonal infections.
    5. Testing, especially for larger indoor events, won't be perfect but will lower the odds of a mass spreading event. A few people will be left out due to false positives, but during an active pandemic, that is to be expected. It also doesn't mean an event won't lead to outbreaks, it just means the odds of an outbreak will be lower and that a given outbreak might be limited.

    Want to get out of this pandemic--vaccinate everyone. Natural immunity doesn't really cut it--see Delta and now Omicron. Odds are everyone has been infected in the US, yet we still have waves of hospitalizations among the unvaccinated. In the meantime, we have to stick with the defense in depth. Just because we are not clapping for front-line workers anymore doesn't mean they don't deserve that and more.

  9. Vog46

    Well, well well
    https://www.cnbc.com/2022/01/08/cyprus-reportedly-discovers-a-covid-variant-that-combines-omicron-and-delta.html

    A researcher in Cyprus has discovered a strain of the coronavirus that combines the delta and omicron variant, Bloomberg News reported on Saturday.

    Leondios Kostrikis, professor of biological sciences at the University of Cyprus, called the strain “deltacron,” because of its omicron-like genetic signatures within the delta genomes, Bloomberg said.

    So far, Kostrikis and his team have found 25 cases of the virus, according to the report. It’s still too early to tell whether there are more cases of the strain or what impacts it could have.

    “We will see in the future if this strain is more pathological or more contagious or if it will prevail” against the two dominant strains, delta and omicron, Kostrikis said in an interview with Sigma TV Friday. He believes omicron will also overtake deltacron, he added.

    The researchers sent their findings this week to GISAID, an international database that tracks viruses, according to Bloomberg.

    The deltacron variant comes as omicron continues its rapid spread across the globe, causing a surge in Covid-19 cases. The U.S. is reporting a seven-day average of more than 600,000 new cases daily, according to a CNBC analysis Friday of data from Johns Hopkins University. That’s a 72% increase from the previous week and a pandemic record.
    **********************************************************
    That's "INTERESTING"
