
This is the mindset that has prevented people from even attempting such projects; it is poisonous and anti-intellectual.

One thing that we almost NEVER do with models is force them to interact with each other. This means that we are completely blind to an entire aspect of isolated models: namely, that inputs may not have the structure we expect, or that their outputs don't actually have the anticipated effect on downstream models. Most current model validation is purely speculative or based only on a finite dataset. If you take two models that you think effectively represent how the world works, put them together, and suddenly they no longer work, you have revealed a huge gap in our knowledge. In a sense this is the ultimate form of model validation, and until you do it, any interpretation you make about how models might or might not work together is complete bullshit.
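The failure mode being described can be sketched concretely: two models that each validate in isolation, but whose composition silently breaks because one model's outputs violate the other's input assumptions. This is a minimal hypothetical sketch; all function names and numbers are illustrative, not from any real package.

```python
import random

def growth_model(temp_celsius):
    """Predicts a growth rate; validated on inputs in [0, 40] degrees Celsius."""
    return max(0.0, 1.0 - abs(temp_celsius - 25.0) / 25.0)

def climate_model(day):
    """Predicts a temperature; validated against observations, but in kelvin."""
    return 288.0 + 10.0 * random.random()

# Each model passes its own validation checks in isolation...
assert growth_model(25.0) == 1.0
assert 250.0 < climate_model(0) < 320.0

# ...but naive composition feeds kelvin where Celsius is expected.
rate = growth_model(climate_model(0))
# The coupled system "works" (no crash) yet returns 0.0 for every day:
# a degenerate output that only the act of coupling could reveal.
```

Nothing in either model's standalone test suite catches this; the unit mismatch is a property of the pair, which is exactly why coupling is a distinct form of validation.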




I spend a lot of time working on models. This is not a "poisonous and anti-intellectual" statement; it is simply a statement grounded in experience.

There are dozens of failed projects that tried to make different models work together. It would be such a beautiful concept if we could "snap" together different models people have built, like Legos, and watch how they interact and develop together. It is also a concept very much in line with the hacker mindset, where we have all observed the phenomenal success of the UNIX building-block approach to getting things done.

It is a great idea in theory, but in practice making models work together is incredibly difficult. You have massive issues of scale (specifically temporal scale, with processes going on at very different rates) and context. This has been tried again and again, and there are many software packages designed to make it easy. As a general rule, they simply do not work outside of narrowly focused domains.

What is "poisonous" to science (and science funding) is to overpromise and underdeliver. What is also "poisonous" is to ignore a long history of modeling and simulation work and the hard-earned lessons and failures gained from that work.


The only reason I say that mindset is problematic is that you frame your statement as "this can't be done," when what is actually the case is that we have never really tried, because there have always been many easier problems to solve. I don't think anyone who has spent even a limited amount of time modelling expects to be able to stick a few equations together and have it all work. It is still an open question whether we can capture, say, 95% of the variance in a system using a collection of simplified models, or whether we have to go full Monte Carlo and study models with the full complexity of the system they represent, because you really do need every last part and can't simplify anything.

There will be novel results no matter what happens: maybe not about C. elegans, but certainly about the models themselves.



