Hacker News

> Within thirty years, we will have the technological means to create superhuman intelligence.

Blackwell.

> o Develop human/computer symbiosis in art: Combine the graphic generation capability of modern machines and the esthetic sensibility of humans. Of course, there has been an enormous amount of research in designing computer aids for artists, as labor saving tools. I'm suggesting that we explicitly aim for a greater merging of competence, that we explicitly recognize the cooperative approach that is possible. Karl Sims [22] has done wonderful work in this direction.

Stable Diffusion.

> o Develop interfaces that allow computer and network access without requiring the human to be tied to one spot, sitting in front of a computer. (This is an aspect of IA that fits so well with known economic advantages that lots of effort is already being spent on it.)

iPhone and Android.

> o Develop more symmetrical decision support systems. A popular research/product area in recent years has been decision support systems. This is a form of IA, but may be too focussed on systems that are oracular. As much as the program giving the user information, there must be the idea of the user giving the program guidance.

Cicero.

> Another symptom of progress toward the Singularity: ideas themselves should spread ever faster, and even the most radical will quickly become commonplace.

Trump.

> o Use local area nets to make human teams that really work (ie, are more effective than their component members). This is generally the area of "groupware", already a very popular commercial pursuit. The change in viewpoint here would be to regard the group activity as a combination organism. In one sense, this suggestion might be regarded as the goal of inventing a "Rules of Order" for such combination operations. For instance, group focus might be more easily maintained than in classical meetings. Expertise of individual human members could be isolated from ego issues such that the contribution of different members is focussed on the team project. And of course shared data bases could be used much more conveniently than in conventional committee operations. (Note that this suggestion is aimed at team operations rather than political meetings. In a political setting, the automation described above would simply enforce the power of the persons making the rules!)

Ingress.

> o Exploit the worldwide Internet as a combination human/machine tool. Of all the items on the list, progress in this is proceeding the fastest and may run us into the Singularity before anything else. The power and influence of even the present-day Internet is vastly underestimated. (For instance, I think our contemporary computer systems would break under the weight of their own complexity if it weren't for the edge that the USENET "group mind" gives the system administration and support people!) The very anarchy of the worldwide net development is evidence of its potential. As connectivity and bandwidth and archive size and computer speed all increase, we are seeing something like Lynn Margulis' [14] vision of the biosphere as data processor recapitulated, but at a million times greater speed and with millions of humanly intelligent agents (ourselves).

Twitter.

> o Limb prosthetics is a topic of direct commercial applicability. Nerve to silicon transducers can be made [13]. This is an exciting, near-term step toward direct communication.

Atom Limbs.

> o Similar direct links into brains may be feasible, if the bit rate is low: given human learning flexibility, the actual brain neuron targets might not have to be precisely selected. Even 100 bits per second would be of great use to stroke victims who would otherwise be confined to menu-driven interfaces.

Neuralink.

---

https://justine.lol/dox/singularity.txt




>> > Within thirty years, we will have the technological means to create superhuman intelligence.

> Blackwell.

I'm fucking sorry but there is no LLM or "AI" platform that is even real intelligence, today, easily demonstrated by the fact that an LLM cannot be used to create a better LLM. Go on, ask ChatGPT to output a novel model that performs better than any other model. Oh, it doesn't work? That's because IT'S NOT INTELLIGENT. And it's DEFINITELY not "superhuman intelligence." Not even close.

Regurgitating facts, sometimes accurately, is NOT intelligence. God it's so depressing to see commenters on this hell-site listing current-day tech as ANYTHING approaching AGI.


> Oh, it doesn't work? That's because IT'S NOT INTELLIGENT.

Ok, let's run this test of "real intelligence" on you. We eagerly await your model. Should be a piece of cake.


> an LLM cannot be used to create a better LLM

By that logic most humans are also not intelligent.


You didn't read him correctly; he's not saying Blackwell is AGI. I believe he's saying that Blackwell might be computationally sufficient for AGI if "used correctly."

I don't know where that "computationally sufficient" line is. It'll always be fuzzy (because you could have a very slow but smart entity). And before we have a working AGI, thinking about how much computation we need always comes down to back-of-the-envelope estimates with radically different assumptions about how much computational work brains do.

But I can't rule out the idea that current architectures have enough processing to do it.


I don't use the A word, because it's one of those words that popular culture has poisoned with fear, anger, and magical thinking. I can at least respect Kurzweil, though, and he estimates the human brain at 10 petaflops. Blackwell does 20 petaflops. That would seem to make it capable of superhuman intelligence to me, especially if we consider that it can focus purely on thinking and doesn't have to regulate a body. Imagine having your own video card that does ChatGPT but 40x smarter.


I think there's a big focus on petaflops and that it may have been a good measure to think about initially, but now we're missing the mark.

If a human brain does its magic with 10 petaflops, and you have 1 petaflop, you should be able to make an equivalent to the human brain that runs at 1/10th of the speed but never sleeps. In other words, once you've reached the same order of magnitude it doesn't matter.

On the other hand, Kurzweil's math really comes down to an argument that the brain is using about 10 petaflops for inference, but it also is changing weights and doing a lot more math and optimization for training (which we don't completely understand). It may (or may not) take considerably more than 10 petaflops to train at the rate humans learn. And remember, humans take years to do anything useful.
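The speed argument above can be sketched in a few lines, using the figures quoted in this thread (Kurzweil's ~10 petaflops for the brain, ~20 for Blackwell). These are rough, contested estimates, not measurements, and the sketch assumes raw flops are the only constraint, which the rest of this comment argues against:

```python
# Back-of-the-envelope sketch of the thread's speed argument.
# Figures are the rough estimates quoted above, not measurements.

BRAIN_PFLOPS = 10.0  # Kurzweil's estimate for the human brain
GPU_PFLOPS = 20.0    # Blackwell figure cited in this thread


def relative_speed(hardware_pflops: float, brain_pflops: float = BRAIN_PFLOPS) -> float:
    """If raw flops were the only constraint, how fast would a
    brain-equivalent run on this hardware, relative to real time?"""
    return hardware_pflops / brain_pflops


# 1 petaflop -> a brain-equivalent at 1/10th speed (that never sleeps)
assert relative_speed(1.0) == 0.1
# Blackwell's 20 petaflops -> 2x real time, not 40x
assert relative_speed(GPU_PFLOPS) == 2.0
```

Note the punchline: on these numbers Blackwell is only ~2x the brain's inference budget, and the training budget (changing weights at the rate humans learn) is the unknown that this arithmetic doesn't cover.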

Further, 10 petaflops may be enough math, but it doesn't mean you can store enough information or flow enough state between the different parts "of the model."

These are the big questions. If we knew the answers, IMO, we would already have really slow AGI.


Yes, I agree there are a lot of interesting problems to solve and things to learn when it comes to modeling intelligence. Vernor Vinge was smart in choosing the wording that we'd have the means to create superhuman intelligence by now, since no one's ever going to agree on whether we've actually achieved it.



