You can never be 100% sure, but you can get arbitrarily close to sure, limited only by how many resources you are willing to spend.
If you have a randomized algorithm with 0% false positives and 50% false negatives, and repeated trials are independent, then combining a few dozen runs gives roughly the same chance of returning the wrong result as a deterministic algorithm failing due to cosmic radiation. The failure probability drops off exponentially in the number of trials, which makes it quite practical to achieve arbitrary certainty. Once you're a few orders of magnitude more certain than the hardware itself, the algorithmic randomness ceases to matter.
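A minimal sketch of the arithmetic, assuming the per-run false-negative probability is exactly 0.5 and the runs are independent (the hardware soft-error figure mentioned in the comment is an illustrative assumption, not a measured value):

```python
# Probability that n independent runs of an algorithm with 0% false positives
# and 50% false negatives ALL return a false negative. With independence,
# the combined answer is wrong only if every run fails, so failures multiply.
def combined_false_negative_rate(per_run_rate: float, runs: int) -> float:
    return per_run_rate ** runs

if __name__ == "__main__":
    per_run = 0.5
    for n in (1, 10, 20, 40):
        p = combined_false_negative_rate(per_run, n)
        print(f"{n:>3} runs: P(all wrong) = {p:.2e}")
    # 40 runs give ~9.1e-13, which is already in the neighborhood of (or below)
    # rough estimates for a long deterministic computation being silently
    # corrupted by a hardware fault -- an assumed ballpark, used here only
    # to illustrate the "more certain than the hardware" point.
```

Each additional run halves the residual failure probability, so the cost of extra certainty grows only linearly while the uncertainty shrinks exponentially.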