I think you missed the point of my example. I was suggesting that if an experiment on the Sun yielded a 95% CI of 400-1200 lumens for its brightness, a reasonable person should conclude that the probability of the Sun's brightness actually falling in that range is approximately zero, while the same result for a 60W light bulb should leave a reasonable person more than 95% certain that the bulb's brightness falls in that range.
Just as many people misinterpret a P value of .01 as meaning there is a 1% chance that the results are due to chance[1], CIs can be misinterpreted in a similar way.
1: A .01 P value actually means that if the null hypothesis is true, you would get a result at least as extreme as the observed one 1% of the time. The analogy to my example above would be: if I run an experiment and get the result "the Sun is less bright than a 60W light bulb" with a P value of .01, it is almost certainly not true that the Sun is less bright than the bulb, because the prior probability of the Sun being less bright than a 60W light bulb is many orders of magnitude smaller than 1%.
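To make that arithmetic concrete, here is a minimal sketch of the Bayes' rule calculation, where the prior (1e-12) and the likelihood under the hypothesis (0.8) are made-up illustrative numbers, and the p-value is treated as roughly the probability of such a result when the hypothesis is false:

    # Minimal sketch; all numbers are hypothetical, chosen only to
    # illustrate how a tiny prior swamps a p = .01 result.

    def posterior(prior_h, p_value, likelihood_given_h):
        """P(H | data) via Bayes' rule, treating the p-value as an
        approximation of P(data | H is false)."""
        num = likelihood_given_h * prior_h
        denom = num + p_value * (1 - prior_h)
        return num / denom

    # H = "the Sun is less bright than a 60W light bulb"
    prior_h = 1e-12           # hypothetical prior: vanishingly unlikely
    p_value = 0.01            # chance of such a result if H is false
    likelihood_given_h = 0.8  # hypothetical chance of such a result if H were true

    print(posterior(prior_h, p_value, likelihood_given_h))  # ~8e-11, nowhere near 99%

Even with a "significant" p = .01, the posterior probability of the hypothesis stays around 8e-11, because the prior is many orders of magnitude smaller than the p-value.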