Hacker News

Quite often it gives you a result that you can then prove directly, or check with other tools. The benefit of MMA is that it has a lot of tools, a good interface, good documentation, and a large community.

In practice, pretty much no one doing science has the expertise or time to completely verify the science they are doing - they are building on centuries of knowledge across many disciplines, and for the most part the community verifies each part as they build knowledge.

And certainly open source does not allow the vast majority of people "to verify the science you do is correct." They'd have to check the code, the compiler, the hardware, ensure no cosmic rays flipped bits during computation, and so on.

So I'd not worry too much about the closed source vs open source nature of it. It's a solid tool that enables lots of research.




The “cosmic rays” argument, to me, is inane. It simply doesn’t practically apply and is certainly not an argument against the benefits of open source code. You’re castigating the whole practice of code review, computer-aided proofs, automated theorem proving, etc.


> The "cosmic rays" argument, to me, is inane.

Have you ever looked at the rates, or do you just dismiss it out of hand? Note that current rates are higher than older references suggest, since feature sizes have shrunk and lower-energy events can flip bits on newer hardware.

Scientific computation, especially at the level of most researchers, is affected by cosmic ray bitflips, without question.

Since the OP was complaining about not being able to check everything ad absurdum, this effect is certainly on the table. If a researcher is ignorant of it, it's more likely to affect research than the difference between closed and open source.

It's also why good researchers, who know this is a real effect, run a computation with multiple methods at different times, until they feel the consensus on the result is robust enough.

If you've never done it, write a program to watch memory for bit flips, and be amazed.
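A minimal sketch of such a watcher in Python (assumptions: non-ECC memory, the OS doesn't relocate or page out the buffer, and you're patient — real memory-test tools like memtest86 work at a much lower level than this):

```python
import time

PATTERN = 0xAA            # alternating bits in every byte
SIZE = 64 * 1024 * 1024   # 64 MiB test buffer; scale up for better odds

def make_buffer(size=SIZE):
    """Allocate a buffer filled with the known pattern."""
    return bytearray([PATTERN]) * size

def scan(buf):
    """Return (index, value) for every byte that no longer matches PATTERN."""
    return [(i, b) for i, b in enumerate(buf) if b != PATTERN]

def watch(buf, interval=60):
    """Rescan the buffer forever, reporting any flipped bytes."""
    while True:
        for i, b in scan(buf):
            print(f"byte {i}: expected {PATTERN:#04x}, got {b:#04x}")
        time.sleep(interval)
```

With only tens of MiB you may wait a long time for a hit; the point is that the expected count scales linearly with buffer size and observation time.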

Here's an intro - do a back-of-the-envelope calculation and see if you still think these events are rare enough not to affect common scientific work.

https://en.wikipedia.org/wiki/Soft_error#Cosmic_rays_creatin...
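To make that concrete, here is a rough sketch. The rate used is the old, oft-quoted IBM estimate of about one soft error per 256 MB of non-ECC RAM per month; treat it as an order-of-magnitude assumption, since real rates vary with altitude, process node, and ECC.

```python
# Back-of-the-envelope expected bit flips for a week-long computation.
# Assumed rate: ~1 error per 256 MB per month (old IBM estimate, non-ECC).
errors_per_mb_month = 1 / 256

ram_mb = 32 * 1024          # a 32 GB workstation
days = 7                    # a week-long simulation

expected_flips = errors_per_mb_month * ram_mb * (days / 30)
print(f"~{expected_flips:.1f} expected bit flips")  # prints "~29.9 expected bit flips"
```

Even if the assumed rate is off by an order of magnitude, long-running jobs on large-memory machines land well away from "never happens" territory.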


>You’re castigating the whole practice of code review, computer-aided proofs, automated theorem proving, etc.

No I'm not. Those are but one avenue for reducing the probability of error during computation. All of them only ensure that the code part is solid - there is an entire other world on the physical side that needs serious engineering: noise reduction, error correction, defect mitigation, thermal management, quantum effects, physical data decay, memory leakage, and so on.

I think by focusing only on the code, you miss a large part of what makes modern computing accurate.


And how many people doing "science" do code review, computer-aided proofs, or automated theorem proving to verify their code is correct? Very, very, very few.


This. In theory it's open, but in reality most people just want to get their work done.

Also, if something is that important, people can and do perform the same calculations using different packages or different algorithms.

Write two algorithms in MMA, or one in MMA and one in something else, and every so often spot check a few cases by hand.
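A sketch of that cross-checking habit, in Python rather than MMA for illustration: two independently written quadrature rules, checked against each other and spot-checked against a known closed form.

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def trapezoid(f, a, b, n=100_000):
    """Composite trapezoid rule - a deliberately different method."""
    h = (b - a) / n
    s = (f(a) + f(b)) / 2
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# Cross-check: the integral of sin(x) on [0, pi] is exactly 2.
v1 = simpson(math.sin, 0, math.pi)
v2 = trapezoid(math.sin, 0, math.pi)
assert abs(v1 - v2) < 1e-6, "methods disagree - investigate"
assert abs(v1 - 2.0) < 1e-6  # spot check against the known answer
```

Agreement doesn't prove both are right, but disagreement reliably tells you something is wrong - which is the cheap signal you actually want.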



