A similar argument applies to experiments that depend in some way on computer networks.
If scientists rely on grid computing, cloud computing, or just the plain public Internet, there is no way to exactly reproduce results for distributed applications.
Fortunately, some researchers are aware of this, and several projects are now building testbeds and infrastructure that provide environments where experiments can be reliably reproduced.
PlanetLab and the public clouds are great ways to do non-reproducible research, because the other users sharing the system (where "the system" is sometimes the public Internet itself) create interference. Reproducible networking research either has to be simulated or has to run on an isolated testbed such as Emulab.
The results from PlanetLab et al. may not be deterministic, but you should be able to reach the same conclusions based on repeated experiments. Otherwise, your results may be a little too fragile to form the basis of sweeping conclusions.
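One way to check that repeated experiments support the same conclusion is to summarize the runs statistically rather than relying on a single measurement. The sketch below is only illustrative: the throughput numbers are invented, and the normal-approximation confidence interval is a simplification (a t-distribution would be more appropriate for this few runs).

```python
import math
import statistics

# Hypothetical throughput measurements (Mbit/s) from eight repeated runs
# of the same experiment on a shared testbed -- the values are made up.
runs = [94.2, 91.7, 95.1, 90.8, 93.5, 92.9, 94.0, 91.2]

mean = statistics.mean(runs)
# Standard error of the mean, and a rough 95% confidence interval
# using the normal approximation (a sketch, not rigorous small-sample
# statistics).
stderr = statistics.stdev(runs) / math.sqrt(len(runs))
ci_low, ci_high = mean - 1.96 * stderr, mean + 1.96 * stderr

print(f"mean = {mean:.1f} Mbit/s, 95% CI = [{ci_low:.1f}, {ci_high:.1f}]")
```

If the confidence interval from one batch of runs overlaps with that of a later batch, the non-determinism of the platform has not undermined the conclusion; if it does not, the result is probably too fragile to generalize from.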
This may not be ideal, but it is no worse than the situation in any branch of science that is not purely digital.
NICTA and WinLab have developed a management framework called OMF for specifying and running experiments on testbed networks. One of the main goals of the project is to improve reproducibility in networking research.