Yes, this is an active area of debate in evidence-based medicine (EBM). If you aggregate outcomes, administering some tests actually appears to worsen prognosis: take two groups of people with identical distributions of (unknown) conditions, test one group and not the other, and the tested group ends up with worse overall outcomes. For example, in some cases people have surgery for a condition that, absent the test, would have remained asymptomatic and been benignly ignored. For certain kinds of conditions, negative outcomes from retrospectively unnecessary treatment are frequent enough to outweigh the cases where discovery and treatment improve outcomes.
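To make the aggregate-outcome argument concrete, here is a minimal back-of-the-envelope sketch. Every number in it is a hypothetical placeholder I'm introducing for illustration (not clinical data, and not from any study referenced here); it only shows the mechanism by which treatment harm spread over all detected cases can outweigh the benefit of early detection for the few cases that would have progressed.

```python
# Toy model of "screen everyone" vs. "don't screen", summed over the population.
# All parameters are made-up placeholders chosen only to illustrate the mechanism.

prevalence = 0.02        # fraction of the population carrying the condition
progress_rate = 0.10     # fraction of carriers whose condition would ever cause harm
untreated_harm = 0.50    # average utility lost by a progressing case without early treatment
treatment_harm = 0.08    # average utility lost to complications of treatment itself

# No screening: only the progressing cases are harmed; indolent cases stay benignly ignored.
no_screen = -prevalence * progress_rate * untreated_harm

# Screening with real-world follow-through: every detected carrier gets treated,
# so all carriers (progressing and indolent alike) bear the treatment harm, while
# early treatment is assumed here to fully avert the disease harm.
screen = -prevalence * treatment_harm

print(f"expected utility per person, no screening: {no_screen:+.5f}")
print(f"expected utility per person, screening:    {screen:+.5f}")
```

With these particular placeholder numbers the screened group comes out worse; with a higher progression rate or a safer treatment the comparison flips, which is exactly why the debate turns on the measured rates rather than on the principle of early detection.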
Of course, discovery does not require treatment, so you could test, find a positive, and do nothing. But EBM people tend to view idealized responses with suspicion, and some argue that under real-world conditions, not administering certain tests, administering them in more restricted situations, or at least not recommending them as the default would improve aggregate outcomes (and the data seems to support that). They would then restrict the tests to cases where testing statistically improves outcomes.