Juju - I don’t think the surveillance testing statistics are being significantly distorted by the tests’ error rates - the percentage positive is high and varies the way you’d expect between areas of high infection and areas of low infection.

That being said - there are a lot of problems with using the tests we have now clinically. Most tests on the market have not gone through the usual FDA review (they were allowed onto the market without it because of the emergency). We don’t have good data on the sensitivity and specificity of those tests, and some of the manufacturers’ claims are being challenged (like the Roche rapid test).

ANY test will have some errors - it will either lean toward picking up all the true positive cases at the cost of some false positives (high sensitivity), or toward being really sure a positive is positive at the cost of some false negatives (high specificity). Which way you want the test to lean depends on what your purpose is. In today’s environment it may also depend on what’s available to you.
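To see the tradeoff concretely, here’s a toy sketch in Python - all the numbers (the antibody-signal distributions and the thresholds) are invented for illustration, not taken from any real assay:

```python
# Toy sketch of the sensitivity/specificity tradeoff.
# All numbers are invented for illustration, not from any real assay.
import random

random.seed(0)

# Simulated antibody "signal" levels: people who were infected tend to
# read higher, but the two groups overlap - that overlap is what forces
# the tradeoff.
infected   = [random.gauss(5.0, 1.5) for _ in range(10_000)]
uninfected = [random.gauss(2.0, 1.5) for _ in range(10_000)]

for threshold in (2.5, 3.5, 4.5):
    sensitivity = sum(x >= threshold for x in infected) / len(infected)
    specificity = sum(x < threshold for x in uninfected) / len(uninfected)
    print(f"threshold {threshold}: sensitivity {sensitivity:.1%}, "
          f"specificity {specificity:.1%}")
```

A low threshold catches nearly every true case but calls more uninfected people positive; a high threshold does the reverse. Picking the threshold is picking which way the test leans.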

Let’s say you are taking an antibody test that leans toward not missing people who had the infection, but has, say, a 3% chance of giving a false positive to somebody who never had COVID (maybe due to cross-reactivity from very high levels of antibodies to a related coronavirus). You, an asymptomatic person, take the test and come back positive. They can only tell you that a positive probably means you had COVID - the naive reading of that 3% error rate is a 97% chance, though strictly the answer also depends on how common past infection is among the people being tested. Good, but not perfect. That’s the reality of testing.
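Here’s the back-of-the-envelope Bayes’ rule version of that, as a sketch - the 99% sensitivity and 97% specificity below are placeholder numbers, not the specs of any actual test:

```python
# Positive predictive value via Bayes' rule: given a positive result,
# what's the chance you really had the infection? The sensitivity and
# specificity here are placeholders, not from any actual test.
def ppv(sensitivity, specificity, prevalence):
    true_positives  = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Same test (97% specificity, i.e. a 3% false-positive rate), different
# populations - "prevalence" is how common past infection is in the
# group being tested.
for prevalence in (0.01, 0.10, 0.30):
    print(f"prevalence {prevalence:.0%}: "
          f"chance a positive is real = {ppv(0.99, 0.97, prevalence):.1%}")
```

The same 3% false positive rate gives very different answers: in a hard-hit area a positive is quite trustworthy, while in an area where almost nobody was infected, a large share of the positives are false.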

Are they giving you the test for epidemiological reasons? Then it’s plenty good enough. If you’re taking the test to confirm that the COVID-like illness you had a month ago was really COVID - it’s pretty likely. If you’re taking it to decide whether you can work with COVID patients without using PPE - well, there’s that few-percent chance you’re not actually immune; it wouldn’t be good to use it that way.
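And as a quick sanity check on that last use case (with assumed numbers - the 3% per-person risk is carried over from the example above, and the 1,000 workers are hypothetical):

```python
# Rough arithmetic: if positive antibody tests were used to clear
# hospital workers to skip PPE, how many non-immune people slip through?
# Both numbers are assumptions for illustration.
cleared_workers = 1_000      # hypothetical workers cleared by a positive test
false_positive_risk = 0.03   # assumed per-person chance the positive was false

print(f"Expected non-immune workers without PPE: "
      f"{cleared_workers * false_positive_risk:.0f}")
```

Even a small per-person error rate turns into dozens of unprotected, non-immune people at hospital scale.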

Btw, SARS and COVID are closely related, and the antibody tests likely can’t distinguish between them - but luckily very few people ever had SARS, so in practice it doesn’t matter.