I've worked on research estimating prevalence from imperfect tests, and something that concerns me about this study is that they don't show error bars for their estimates. Typically you would report a confidence interval for prevalence rather than just a point estimate, and those intervals can often be fairly wide. There are two sources of uncertainty here: the inherently probabilistic nature of the diagnostic test itself, and uncertainty in our estimates of its sensitivity and specificity.
I think this paper by Peter J. Diggle [0] gives a solid methodology. Instead of treating sensitivity and specificity as fixed values taken from sample estimates, you model each one as having a beta distribution. Those beta distributions come straight out of a Bayesian treatment of the validation data as Bernoulli trials.
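To make that concrete, here's a minimal Monte Carlo sketch of the idea. This isn't Diggle's exact posterior computation; it just propagates beta-distributed uncertainty in sensitivity and specificity through the standard Rogan-Gladen correction, and every count in it is made up for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical validation data: the test caught 90/100 known
    # positives and cleared 97/100 known negatives (Beta(1, 1) priors).
    se = rng.beta(1 + 90, 1 + 10, n)   # sensitivity draws
    sp = rng.beta(1 + 97, 1 + 3, n)    # specificity draws

    # Hypothetical survey: 150 positives out of 3000 people tested.
    q = rng.beta(1 + 150, 1 + 2850, n)  # raw positive-rate draws

    # Rogan-Gladen correction per draw, clipped to a valid proportion:
    # q = p*se + (1 - p)*(1 - sp)  =>  p = (q + sp - 1) / (se + sp - 1)
    p = np.clip((q + sp - 1) / (se + sp - 1), 0, 1)

    print(f"median prevalence: {np.median(p):.4f}")
    print(f"95% interval: {np.percentile(p, [2.5, 97.5]).round(4)}")

Note how much of the interval width comes from specificity: at low prevalence, a percentage point of uncertainty in the false-positive rate can swamp the sampling noise of the survey itself, which is exactly why reporting the raw positive rate alone is misleading.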
Amazing. Reading more carefully, as FabHK pointed out above, they aren't even applying the obvious correction. They're just reporting the positive rate of the imperfect test. I've implemented Diggle's method [0]. When I have time, I'll see if they've provided enough data to do a proper analysis, and maybe write a blog post about it or something.
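For anyone following along, the "obvious correction" is the Rogan-Gladen estimator, the same formula applied per draw in the sketch above. As a standalone point estimate:

    def rogan_gladen(raw_rate, sensitivity, specificity):
        """Correct a raw test-positive rate for test error.

        Solves q = p*se + (1 - p)*(1 - sp) for the true prevalence p.
        Only meaningful when sensitivity + specificity > 1.
        """
        p = (raw_rate + specificity - 1) / (sensitivity + specificity - 1)
        return min(max(p, 0.0), 1.0)  # clip to a valid proportion

    rogan_gladen(0.05, 0.90, 0.97)  # ~0.023, versus the raw 0.05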
[0] https://www.hindawi.com/journals/eri/2011/608719/