
> I mean - it's more than a sellable product, the reason we're doing this is to be able to advance medicine

I get this approach for trauma care, but that's not really what we're talking about here. With medicine, how do we know we aren't making things worse without knowing how and why it works? We can focus on immediate symptom relief, but that's a very narrow window with regards to unintended harm.

> Alright and what if this is also a lot quicker to solve with AI?

Can we really call it solved if we don't know how or why it works, or what the limitations are?

It's extremely important to remember that we don't have Artificial Intelligence today; we have LLMs and similar tools designed to mimic human behavior. An LLM will never invent a medical treatment or medication, or, more precisely, it may invent one by complete accident, and that answer will look exactly like all the wrong answers it gave along the way. LLMs are tasked with answering questions in a way that statistically matches what humans might say, with variance based on a randomness factor and a few other control knobs.
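For what it's worth, the "randomness factor" here usually means temperature sampling: the model produces scores (logits) for each candidate next token, and a temperature knob reshapes those scores before a token is drawn. A minimal sketch with toy, made-up logits (not a real model):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw logits into probabilities; lower temperature sharpens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens.
logits = [2.0, 1.0, 0.5]

# Low temperature: almost always picks the statistically likeliest token.
near_greedy = softmax_with_temperature(logits, temperature=0.1)

# High temperature: flatter distribution, more "creative" (and more wrong) answers.
diffuse = softmax_with_temperature(logits, temperature=2.0)
```

The point is that correct and incorrect outputs come from the exact same sampling process; nothing in the mechanism marks an accidental good answer as different from a confident bad one.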

If we do get to actual AI, that's a different story. It takes intelligence to invent the new miracle cures we hope for. The AI has to reason about how the human body works, about the complex interactions between the body, the environment, and any interventions, and it has to reason through the mechanisms necessary for a novel treatment. It would also need to model these complex systems in ways that humans have yet to figure out; if we could already model the human body in a computer algorithm, we wouldn't need AI to do it for us.

Even at that point, let's say an AI invents a cure for cancer. Is that really worth all the potential downsides of all the dangerous things such a powerful AI could do? Is a cure for cancer worth knowing that the same AI could also be used to create bioweapons on a level no human could match? And that doesn't even get into the unknown risks of what an AI would want for itself: what its motivations would be, or what emotions and consciousness would look like when they emerge in an entirely new evolutionary system separate from biological life.



