Hacker News

I'm not sure I understand. Can you explain?

Are you saying people cannot understand this data?




You know, why don't you help us out and wget some data from CERN and give them some insight into their ongoing LHC experiments.

I'm sure they could use a nudge in the right direction, and I bet a little bit of Perl or Python is all they need to solve some deep mysteries.

Oh, and you have 6,000TB of disk space to download the data from the ATLAS detector, right?

Once you're done with that, there's all kinds of data regarding cancer treatments you can crunch through. I bet that's a weekend of work at the outside.


You're right. I couldn't process 6 petabytes in Perl.

I'd use C for that amount of data.


Great, so you're capable of writing highly parallel cluster-scale code in C that does intensive precision numerical analysis? Most choose vectorized FORTRAN or a combination of C++ and CUDA, but hey, knock yourself out.

CERN has a 3700-core supercomputer to crunch through this kind of data. You can rent that much compute on Amazon for about $800 an hour, so I guess you're good to go.

Sorry to be so harsh here. While a certain amount of "constructive naivety" is always necessary to try the impossible, you need to recognize that considerable expertise is required to process and analyze data of this complexity at scale.

This is not like a movie where six minutes of furious typing can solve any problem.


I benchmarked Fortran vs. x86_64 SSE extensions in C, and ... C's fine.

I'd rather have local clusters than Amazon or Google "cloud" any day of the week.

Spotting a methodology bias is not that hard.

Why the heck would you need CUDA? NVIDIA???

C'mon man.


Are you simply trolling at this point?

If you're so confident in your ability to process this sort of data, please, post your follow-up on HN.



