Not sure where you got "less than or equal to a second old" from? Maybe I'm misunderstanding what you mean?
There is no single system-wide imposed sampling rate, so it's up to you to set the sampling rate based on what sort of queries you want to be able to run with good enough accuracy. We have 1:1 data for some things (say, errors served on a particular service), and ten-thousand- or hundred-thousand-to-one data for other things where there are, say, tens of millions of log lines per second.
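
For concreteness, here's a minimal sketch of how per-signal sampling along these lines often works (the event types, rates, and function names are hypothetical, not anything specific to the system above): each kept event records the rate it was sampled at, so query-time aggregation can re-inflate counts.

    import random

    # Hypothetical per-signal rates: keep 1 in N events.
    # Errors kept 1:1; high-volume access logs at 1:100_000.
    SAMPLE_RATES = {
        "error": 1,
        "access_log": 100_000,
    }

    def maybe_sample(event):
        """Keep an event with probability 1/rate, tagging it with the
        rate so queries can multiply counts back up."""
        rate = SAMPLE_RATES.get(event["type"], 1)
        if random.randrange(rate) == 0:
            event["sample_rate"] = rate
            return event  # kept: stands in for `rate` original events
        return None       # dropped

    def estimated_count(kept_events):
        """Re-inflate: each kept event represents sample_rate originals."""
        return sum(e["sample_rate"] for e in kept_events)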
I wonder how often the data is inaccurate given the potentially low sample size?
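
For rough intuition (my own back-of-the-envelope, not from the thread): under uniform 1-in-N sampling, the re-inflated count is unbiased, and its relative standard error is about 1/sqrt(k), where k is the number of events actually kept. So accuracy depends on how many samples survive, not on the raw rate: 10,000 kept events gives roughly 1% error even at 1:100,000 sampling.

    import math

    def relative_std_error(kept: int, rate: int) -> float:
        """Relative standard error of the re-inflated count kept * rate,
        assuming each original event was kept independently with
        probability 1/rate. Approximately 1/sqrt(kept) for large rates."""
        p = 1.0 / rate
        return math.sqrt((1.0 - p) / kept)

    # e.g. 10_000 kept events at 1:100_000 -> ~1% standard error
    print(relative_std_error(10_000, 100_000))  # ~0.0099999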