The key thing to note is that the parameter being estimated is a fixed (but unknown) quantity, unlike in Bayesian inference, where we treat the parameter (or our belief about it) as random. The distinction is important: we want inference about the parameter, but with confidence intervals the interval is the random quantity, so we cannot attach probability statements to the parameter itself.
As a completely fabricated example, suppose the true proportion in a coin flip experiment is 40%. If my confidence interval is [.45, .65], what's the probability that .40 is in [.45, .65]? It's 0, because .40 is simply not in that interval. The probability that a fixed but unknown parameter lies in any *particular* confidence interval is either 0 (it isn't in the interval) or 1 (it is). The _proportion_ of intervals that contain the true parameter, across repeated experiments, is the confidence level (e.g., 95%).
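If it helps, here is a minimal simulation sketch of that "proportion of intervals" idea. The specifics are my assumptions for illustration (100 flips per experiment, standard Wald intervals): each individual interval either covers p = 0.40 or it doesn't, but the fraction of intervals that cover it comes out close to 95%.

```python
# Sketch: repeat a coin-flip experiment many times, build a 95% Wald interval
# each time, and count how often the (fixed) true proportion is covered.
# Assumed setup for illustration: p_true = 0.40, n = 100 flips per experiment.
import numpy as np

rng = np.random.default_rng(0)
p_true, n, reps = 0.40, 100, 10_000

covered = 0
for _ in range(reps):
    flips = rng.binomial(1, p_true, size=n)        # one experiment of n flips
    p_hat = flips.mean()
    half_width = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)  # 95% Wald half-width
    lo, hi = p_hat - half_width, p_hat + half_width
    covered += (lo <= p_true <= hi)                # this interval either covers p_true or not

print(f"Coverage over {reps} intervals: {covered / reps:.3f}")  # roughly 0.95
```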
To your always-positive example, that procedure is not particularly weird. There's always a balancing act with CIs between length and confidence level (otherwise I could take all the reals as my interval and claim 100% confidence). That your [0, inf) interval has 100% coverage suggests you could shrink it to one with a finite upper bound without losing more than 5% confidence. It's hard to say exactly how much without a specific distribution or at least mild assumptions in mind, but an application of either Markov's or Chebyshev's inequality would give you loose bounds under fairly minor assumptions.
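To make the Chebyshev route concrete, here is a rough sketch. The assumptions are mine for illustration: i.i.d. observations with a known upper bound on the standard deviation (`sigma_max`), and the helper name and example numbers (mean 3.2, bound 2.0, n = 50) are hypothetical. Chebyshev gives P(|x̄ − μ| ≥ kσ/√n) ≤ 1/k², so taking k = 1/√α yields a conservative (1 − α) interval for the mean.

```python
# Sketch of a conservative (1 - alpha) interval for a mean via Chebyshev's
# inequality, assuming only i.i.d. data and a known variance bound.
import math

def chebyshev_interval(x_bar, sigma_max, n, alpha=0.05):
    """Conservative (1 - alpha) interval for the mean; wide but assumption-light."""
    k = 1.0 / math.sqrt(alpha)                 # ~4.47 for alpha = 0.05
    half_width = k * sigma_max / math.sqrt(n)  # Chebyshev bound on the sample mean
    return x_bar - half_width, x_bar + half_width

# Hypothetical nonnegative quantity: sample mean 3.2, sd bounded by 2.0, n = 50
lo, hi = chebyshev_interval(3.2, sigma_max=2.0, n=50)
print(f"[{max(lo, 0.0):.2f}, {hi:.2f}]")       # clip at 0 since the quantity is nonnegative
```

This is much wider than a normal-theory interval at the same level, which is the price of making almost no distributional assumptions; it still gives you a finite upper bound rather than [0, inf).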