Assuming a truly random sample of 5000 unique, non-overlapping viewers for each experiment, the standard error for the experiments runs from 0.3% to 0.5%.
The sample estimate is pretty likely (> 95%) to land within +/-2 standard errors of the true value. So in this case, the difference needs to be more than about 0.6% to 1.0%, depending on which experiments you're comparing. In other words, these do look like significant differences.
(For reference, SE ~ sqrt(p*(1-p)/N), where p is the observed proportion — this holds when N is small relative to the population size, so no finite-population correction is needed.)
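As a quick sketch of that formula — the rates 5% and 15% below are my guesses, picked only because they reproduce the 0.3%-0.5% SE range mentioned above for N = 5000:

```python
import math

def std_err(p, n):
    """Standard error of a sample proportion, sqrt(p*(1-p)/N).

    Assumes N is small relative to the population, so no
    finite-population correction is applied.
    """
    return math.sqrt(p * (1 - p) / n)

n = 5000
# Hypothetical conversion rates; not from the actual experiments
for p in (0.05, 0.15):
    se = std_err(p, n)
    print(f"p = {p:.0%}: SE = {se:.2%}, +/-2 SE band = {2 * se:.2%}")
```

At p = 5% this gives an SE of about 0.31%, and at p = 15% about 0.50%, matching the range quoted above.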
Just curious - did you run any tests for statistical significance?