
If they reach AGI, or more simply replace a chunk of workers with AIs, reaching these numbers isn't far-fetched.



Oh please no... not the Tesla Autopilot story again.

These are basic language models that are easy to reproduce; the only barrier to entry is the massive computational capacity required. What is OpenAI doing that Google and others can't reproduce?


Apparently shipping without fear. Google had a lot of the fundamental research happen at Google Brain, and developed an LLM to rival GPT and a generative model that looks better than DALL-E in papers, but decided to show no one and keep them in house because they haven't figured out a business around them. Or maybe it's fear of brand damage; I don't know what is keeping them from productionizing the tech. As soon as someone does figure out a business consumers are okay with, they'll probably follow with ridiculous compute capacity and engineering resources, but right now they are just losing the narrative war because they won't ship anything they have been working on.


Except unlike self-driving cars, these models repeatedly deliver desirable, interesting, and increasingly mind-blowing things they weren't designed to do, surprising everyone including their makers, i.e. zero-shot generalised task performance. Public awareness of what unfiltered large models beyond a certain size and quality are capable of when properly prompted is obscured in part by the RLHF-jacketed restrictions limiting models like ChatGPT. There's relatively little hype around the coolest things LLMs can already achieve, and only a minute fraction of their surface potential has so far been scratched.


This company will not reach AGI. Let's be real here for a moment. This company doesn't even have a decent shot at Google's lunch if Google comes to its senses soon, which it will.


_Startup has no shot once the incumbent comes to its senses_ is a claim that I think Hacker News of all places would be cautious about believing too fully.

Is it likely that Google or others with large research wings can compete with OpenAI? Very probably so, but I'm assigning a non-trivial risk that the proverbial emperor has no clothes, and that incumbents like Google cannot effectively respond to OpenAI given the unique constraints of being a large conglomerate.

Regardless, it seems time will provide the answer in a couple of months.


You _do_ understand that everything we've seen from OpenAI, Google already showed us they have? Not to mention the original research, and being the primary R&D force behind the vast majority of the AI you're seeing. They just haven't put it in the hands of users as directly yet, for reasons one can only speculate about.


Sounds a lot like Xerox and GUIs, Microsoft and Web 2.0, Microsoft and smartphones, etc.


I must say that both your and the parent's points are very enlightening.

Yours in that it follows there's still quite a bit of room for smaller players to get ahead of OpenAI.

The parent's in that, to achieve the above, one can simply leverage the public papers produced by the bigger research labs.


Depends on the timescale.

I have the feeling that smaller players are about as likely to get past the GPT-n family in the next 2-3 years as I am to turn a Farnsworth Fusor into a useful power source.

Both have major technical challenges that might be solvable by a lone wolf: in the former case, reducing the data/training requirements; in the latter, stopping ions from wastefully hitting a grid.

But in 10 years the costs should be down about 99%, which turns AI training costs from "major investment by a megacorp or the super-rich" into "a lottery winner might buy one".
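As a sanity check on what that implies (a quick sketch; the 99%-per-decade figure is my assumption, not data):

  # Sketch: what annual decline compounds to ~99% over a decade?
  decade_factor = 0.01                   # costs fall to 1% of today's (assumption)
  annual_factor = decade_factor ** 0.1   # tenth root, ~0.63
  print(f"~{1 - annual_factor:.0%} cost reduction per year")  # ~37%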


This tech is capital-intensive even when you know how to do it.


I've heard estimates in the tens of millions of dollars. That's rather attainable.


Isn't that quite a lot of non-personnel cost for a software startup? And how many iterations do you throw away before you get one that generates income?


I didn't necessarily mean 10-person startups. There are quite a few companies smaller than OpenAI but much larger than 10 people.


Yeah, especially since there's a Stripe-Amazon partnership piece on the front page right now, and Amazon Pay's right there.


If they reach AGI, the AGI isn't necessarily going to be happy to work for free.


Depends on how opaque the box that holds it is. If we feed the AGI digital heroin and methamphetamine, it'd be controllable the way actual humans are with those. Or I've been watching too much sci-fi lately.


This is an interesting point. Motivation (and consciousness) is a complex topic, but we can see, for example, that drugs are essentially spurious (not 'desired' in a sense) motivators. They are a kind of reward given for no particular activity, and they can become highly addictive (because in a way we seem to be programmed to seek rewards).

Disclaimer: Somewhat speculative.

I don't think aligning the motivation of an AGI with, for example, the tasks that are useful for us (and for them as well) is unethical. Humans basically have this as well: we like working (to an extent, or at least we like being productive/useful), and we seek things like food and sex (because they're important for our survival). It seems alright to make AIs like their work too. Depending on the AI, it also seems fair to give them a share of self-determination, so they can not only serve our interests (ideally, the interests of all beings) but also safeguard their own wellbeing, as systems with varying amounts of consciousness. This is little touched upon, even in fiction (I guess Philip K. Dick was a pioneer in the wellbeing of non-humans with 'Do Androids Dream of Electric Sheep?'). The goal should be to ensure a good existence for everyone :)


Do you think AGI will care about wealth at all (whenever this happens)?


Wealth buys compute cycles (also paperclips).


Depends on how it's grown. If it's a black box that keeps improving, but not by any means the developer understands, then maybe so. If we manage to decode the concept of motivation as it pertains to this hypothetical AGI, and are in control of it, then maybe not.

There's nothing that says a mind needs an ego, or an id, or any of the other parts of a human mind. That's just how our brains evolved, living in societies over millions of years.


Why wouldn't it?


Wealth isn't the same thing to all people, and wealth as humans define it isn't necessarily going to be what a superintelligence values.

The speed difference between transistors and synapses is the difference between marathon runners and continental drift; why would an ASI care any more about dollars or statues or shares or apartments than we care about changes to individual peaks on the Mid-Atlantic Ridge, or how much sand covers such features in the Sahara?
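A back-of-envelope sketch of that analogy, with rough assumed figures (a ~GHz transistor switch vs. a ~100 Hz firing rate; a marathon pace vs. ~3 cm/year of drift):

  # All figures are rough, order-of-magnitude assumptions.
  transistor_hz = 3e9                  # ~3 GHz switching
  synapse_hz = 100.0                   # ~100 Hz neuron firing rate
  runner_m_per_s = 3.0                 # ~3 m/s marathon pace
  drift_m_per_s = 0.03 / 3.15e7        # ~3 cm/year continental drift

  print(f"transistor/synapse: {transistor_hz / synapse_hz:.0e}")      # ~3e+07
  print(f"runner/drift:       {runner_m_per_s / drift_m_per_s:.0e}")  # ~3e+09

Both ratios land in the 10^7 to 10^9 range, which is the point of the analogy.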


Wealth doesn't have to be the same thing for everyone in order for someone to care about it. That's evident already, because some people care about wealth and others don't.

What does the speed difference of transistors have to do with anything? Transistors pale in comparison to the interconnection density of synapses, yet that has nothing to do with wealth either...


Everything you and I consider valuable is a fixed background from the point of view of a mind whose sole difference from ours is the speedup.

I only see them valuing that if they're also extremely neophobic in a way that, for example, would look like a human thinking that "fire" and "talking" are dangerously modern.

> Transistors pale in comparison to the interconnection density of synapses

Not so. Transistors are also smaller than synapses, by about the degree to which marathon runners are smaller than hills.

Even allowing extra space for interconnections, and cheating in favour of biology by assuming an M1 chip is a full millimetre thick rather than just the however-many nanometres of the transistors alone, it still has a better volumetric density than we do.

(It sucks for power and cost relative to us when used to mimic brains, but that's why it hasn't already taken over.)
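Roughly, with ballpark assumed figures for an M1-class chip and a human brain:

  # Ballpark assumptions, for illustration only.
  chip_transistors = 16e9    # ~16 billion transistors (M1-class chip)
  chip_die_mm2 = 120.0       # ~120 mm^2 die area
  chip_thick_mm = 1.0        # generous 1 mm "thickness"
  chip_cm3 = chip_die_mm2 * chip_thick_mm / 1000.0   # = 0.12 cm^3

  brain_synapses = 1e14      # ~10^14 synapses
  brain_cm3 = 1200.0         # ~1200 cm^3 brain volume

  print(f"chip:  {chip_transistors / chip_cm3:.1e} transistors/cm^3")  # ~1.3e+11
  print(f"brain: {brain_synapses / brain_cm3:.1e} synapses/cm^3")      # ~8.3e+10

Even with that generous millimetre of thickness, the chip comes out slightly ahead per unit volume.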


> Everything you and I consider valuable is a fixed background from the point of view of a mind whose sole difference from ours is the speedup.

This is completely made up and I already pointed that out.

> Not so. Transistors are also smaller than synapses, by about the degree to which marathon runners are smaller than hills.

So: brains are wired in 3D; transistors aren't. Transistors don't have anything like the interconnection density of brains; the gap is orders of magnitude greater than what you point out here.
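To put rough, assumed numbers on the per-node connectivity gap:

  # Assumed, order-of-magnitude figures.
  synapses_per_neuron = 1e4   # ~10^4 connections per cortical neuron
  gate_fanout = 4.0           # typical logic-gate fan-out
  print(synapses_per_neuron / gate_fanout)  # ~2500x more connections per node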

> Even allowing extra space for interconnections, and cheating in favour of biology by assuming an M1 chip is a full millimetre thick rather than just the however-many nanometres of the transistors alone, it still has a better volumetric density than we do.

Brains have more interconnection density than chips do, by orders of magnitude. But this is all completely beside the point, as it has nothing to do with why people value things and why an AI would or wouldn't.


> But this is all completely beside the point, as it has nothing to do with why people value things and why an AI would or wouldn't.

You already answered that yourself: it's all made up.

Given it's all made up, nothing will cause them to value what we value — unless we actively cause that valuation to happen, which is the rallying cause for people like Yudkowsky who fear AGI takeover.

And even then, anything you forget to include in the artificial values you give the AI is permanently lost forever, because an AI is necessarily a powerful optimiser for whatever it was made to optimise, and that always damages whatever isn't being explicitly preserved even when the agents are humans.

> Transistors don't have anything like the interconnection density of brains.

The only limit is heat. They are already packed way tighter than synapses: an entire Intel 8080 processor made with a SOTA litho process would be smaller than just the footprint of the soma of the smallest neuron.


I think a lot of people are misunderstanding what I meant: that it's really high for a business that markets itself as a non-profit. I've seen similar structures with something like 10x profit caps, which seems reasonable. 100x is a lot of ceiling.





