That doesn't mean what you (and possibly they) think that means.
Suppose that characteristic X (e.g. vitamin D deficiency) puts you at risk of doing worse at Y. If we pick a population based on how poorly they do at Y, we expect to see that X will be more common in it than in the general public. But within that population there is no reason to expect that having characteristic X makes you do noticeably worse - it mainly made you more likely to do badly enough to be selected in the first place.
This can be seen quite precisely with a toy model. Suppose that we have a population evenly split between two subpopulations, with an outcome that varies on a normal distribution within each - except that the subpopulation carrying the risk factor averages 1 standard deviation worse.
If we select the bottom 5% of the combined population on that outcome, roughly 90% of the people we pick up will come from the subpopulation with the risk factor. The bottom 1% is similarly lopsided, as are the bottom 0.1%, 0.01%, and so on - if anything the skew grows as the cutoff moves out, because the better group's tail thins out faster. At every cutoff, selection has stacked the sample with people who carry the risk factor.
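To check that arithmetic, here is a small sketch (mine, not part of the original argument) that computes the composition of the selected tail exactly, assuming the 50/50 split and 1 standard deviation shift described above; the function and variable names are just illustrative.

```python
# Toy model: two equal subpopulations, the risk-factor group averaging 1 SD worse.
# For several tail cutoffs, find the threshold on the combined population and ask
# what fraction of the selected tail comes from the risk-factor group.
from scipy.optimize import brentq
from scipy.stats import norm

SHIFT = 1.0  # the risk-factor group averages 1 SD worse


def tail_composition(tail_fraction):
    # Threshold t where the combined population has `tail_fraction` of its mass below:
    # 0.5 * P(no risk factor below t) + 0.5 * P(risk factor below t) = tail_fraction
    f = lambda t: 0.5 * norm.cdf(t) + 0.5 * norm.cdf(t + SHIFT) - tail_fraction
    t = brentq(f, -20, 20)
    risky = 0.5 * norm.cdf(t + SHIFT)   # mass from the risk-factor group in the tail
    total = risky + 0.5 * norm.cdf(t)   # total mass in the tail
    return t, risky / total


for q in (0.05, 0.01, 0.001, 0.0001):
    t, share = tail_composition(q)
    print(f"bottom {q:.2%}: threshold {t:+.2f} SD, "
          f"{share:.0%} of the selected tail has the risk factor")
```

Running it gives a share of roughly 90% at the 5% cutoff, creeping upward as the cutoff moves further out.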
No amount of analysis of the people inside that tail will suggest that the risk factor matters much - most of the one-standard-deviation gap gets absorbed by the selection itself. But compare the population in that tail to the general population, and that standard deviation stands out like a sore thumb as a major risk factor.
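A companion sketch (again mine, a simple Monte Carlo rather than anything from the original) shows what an analyst who only sees the selected tail would observe: the prevalence of the risk factor jumps relative to the general population, but the outcome gap between the two groups inside the tail shrinks to a small fraction of a standard deviation.

```python
# Simulate the toy model, select the bottom 5%, and compare the two groups
# inside the selected tail versus in the full population.
import numpy as np

rng = np.random.default_rng(0)
N = 2_000_000
has_risk = rng.random(N) < 0.5                 # 50/50 split on the risk factor
outcome = rng.normal(size=N) - has_risk * 1.0  # risk-factor group is 1 SD worse

cutoff = np.quantile(outcome, 0.05)            # bottom 5% of everyone
in_tail = outcome <= cutoff

print("risk-factor prevalence, general population:", has_risk.mean())
print("risk-factor prevalence, bottom 5%:", has_risk[in_tail].mean())

# Inside the tail the gap between the two groups nearly vanishes.
gap_all = outcome[~has_risk].mean() - outcome[has_risk].mean()
gap_tail = outcome[in_tail & ~has_risk].mean() - outcome[in_tail & has_risk].mean()
print(f"outcome gap, whole population: {gap_all:.2f} SD")
print(f"outcome gap, inside the tail:  {gap_tail:.2f} SD")
```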
Therefore the fact that about 50% of the general population is vitamin D deficient, while over 80% of those hospitalized with COVID-19 are, suggests that vitamin D deficiency is a risk factor. But we should draw no conclusions from the fact that vitamin D deficiency doesn't predict different outcomes within the population that lands in the hospital.
(However, other research found that patients given vitamin D after landing in the hospital were significantly less likely to wind up in the ICU or dead. We should definitely draw some conclusions from that!)