> Not to be too pessimistic here, but why are we talking about things like this?
I also think that what we have is merely a very well-compressed knowledge base; we are far from superintelligence, and so-called safety sounds more Orwellian than having any real value. That said, I think we should take Ilya at his literal word: his goal is to build a superintelligence. Given that goal, lofty as it is, SSI has to put safety in place. So, there, safe superintelligence.
An underappreciated feature of a classical knowledge base is returning “no results” when appropriate. LLMs so far arguably fall short on that metric, and I’m not sure whether that’s an inherent limitation.
So out of all the potential applications of current-day LLMs, I’m really not sure this is a particularly good one.
Maybe this is fixable if we can train them to cite their sources more consistently, in a way that lets us double check the output?
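One way to make the double-checking concrete, as a minimal sketch: if the model were prompted to attach a source URL and a verbatim quote to each claim, a post-hoc checker could fetch the source and confirm the quote actually appears there. Everything below (the `Citation` shape, the field names) is a hypothetical format I made up for illustration, not any vendor's actual citation API; it also only catches fabricated quotes, not quotes that fail to support the claim.

~~~python
# Hedged sketch: post-hoc verification of model citations, assuming the
# model emits (claim, source_url, verbatim quote) triples. The schema is
# hypothetical; only the verification idea is the point.
import urllib.request
from dataclasses import dataclass


@dataclass
class Citation:
    claim: str       # the statement the model made
    source_url: str  # where the model says it came from
    quote: str       # the verbatim span attributed to that source


def quote_appears_in_source(citation: Citation, timeout: float = 10.0) -> bool:
    """Fetch the cited page and check whether the quoted span appears in it.

    A failed fetch counts as unverified; a whitespace-normalized substring
    match keeps minor formatting differences from causing false negatives.
    """
    try:
        with urllib.request.urlopen(citation.source_url, timeout=timeout) as resp:
            page = resp.read().decode("utf-8", errors="replace")
    except OSError:
        return False  # unreachable or broken source: treat as unverified

    normalize = lambda s: " ".join(s.split())
    return normalize(citation.quote) in normalize(page)


if __name__ == "__main__":
    c = Citation(
        claim="Example claim for demonstration.",
        source_url="https://example.com/",
        quote="Example Domain",
    )
    print("verified" if quote_appears_in_source(c) else "unverified")
~~~

Even this weak check would flag the failure mode people complain about most: citations that point at pages which never said the quoted thing at all.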