Reproducibility is a major issue across science in general, but the difference with computational work is that there's no reason one shouldn't be able to easily re-run a defined analysis on a more recently updated data set to ask whether previously drawn conclusions still hold. I actually published a side-project paper on this (in the biological sciences) last year [1] - what was scary was how little discussion there was around this idea, despite the fact that large databases of biological data are CONSTANTLY changing and updating.
The other difference is that, as far as I know, computer science is the only discipline for which industry has already solved the reproducibility problem; it's one thing to be asked to design a method for running reproducible studies of humans, and quite another to ask researchers to run `git remote add origin https://github.com/user/repo && git push --set-upstream origin main`. That isn't asking for any funding, support, or meaningful extra effort on the researcher's part, and I frankly don't understand how the CS academic community doesn't have this as a standard when it would be so easy to implement.
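For concreteness, here's a minimal sketch of what "publish the code" amounts to - the repository URL, branch name, and commit message are placeholders, and some hosts default to `master` rather than `main`:

```sh
# One-time setup: turn the analysis directory into a git repository
git init
git add .
git commit -m "Analysis code and environment for the paper"

# Publish to a public remote (URL is a placeholder)
git remote add origin https://github.com/user/repo.git
git push --set-upstream origin main
```

That's the entire overhead, which is why the bar for sharing code feels so much lower than the bar for, say, re-running a clinical study.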
[1] Holehouse, A. S. & Naegle, K. M. Reproducible Analysis of Post-Translational Modifications in Proteomes-Application to Human Mutations. PLoS One 10, e0144692 (2015). (http://journals.plos.org/plosone/article?id=10.1371/journal....)