
Further evidence of my justified true belief that the world will rapidly divide into two groups: those who already had programmatic control over their computers, now infinitely more productive through direct access to the GPT APIs (to say nothing of the firms that will use LLMs only internally, trained on their own codebases, for example), and those who believe GPT = ChatGPT, which will slowly become just another conduit for ads through plugins. It's quite sad when even Nature becomes a spokesperson for the latter group, who will continue to believe that LLMs are a racket on the order of Bitcoin.



I haven’t been able to get GPT-4 API access months after applying and being a paid subscriber.

The world right now seems to be divided more into “have GPT-4 API access / don’t have GPT-4 API access”.


With more and better models released, getting access to OpenAI's API is becoming less interesting.

What is interesting is having access to GPUs (cloud or real hardware). The world is divided into "has access to enough GPU power / doesn't have access to enough GPU power".

My prediction is that this will get worse for a while.


"Better"?

Get out of town.


Would you care to explain what you mean?

I meant that Falcon 40B is better than LLaMA 65B on many benchmarks, and we will see even better models released with time.

What is wrong with that?


That is fair; your post left it a bit ambiguous whether you meant better in reference to GPT-4 or not.

Competitors aren't even at GPT-3.5.


When I use ChatGPT, I find anything but GPT-4 unusable; when I'm programming against the API, I actually tend to find myself using GPT-3.5. I still haven't had a chance to experiment with the open-source LLMs, but that's my project for next month.


I have preferred GPT-3.5. GPT-4 is slow and verbose; I only resort to it when none of my prompts in 3.5 gave the expected result.


Same here. Instead, I get to pay for ChatGPT and have people tell me it’s inferior.


They just can't scale to meet the demand; that's their main issue right now.


Maybe, but I can't help but notice the "programmatic control" is over someone else's computer. Those who care the most about having programmatic control over their computers are also those who would rather send their prompts to their own GPU rather than an Azure server farm. I believe there will be a programming revolution built on the foundation of LLMs, but it won't really take off until we can use local LLMs for the bulk of the processing.


And with computers there was a divide between those willing to learn to program and those using the applications that were programmed... what's your point? The majority of society needs a pretty big level of abstraction to be willing to use something.


Yes, I think I have a similar sentiment. Articles like these would do better to take the approach of

"Oh shit, something huge is happening and we might not be intelligent enough to see the ramifications, but here's a humble attempt"

rather than

"Ha, the techies have done something they think is impressive again. It's certainly interesting, but as usual they're exaggerating it and failing to think from a nuanced human perspective. However us journalists are trained in that sort of thing, so we can help out here, and we know you readers will be all too familiar with the way those techies can only think like computers lol."


> now infinitely more productive

How much more productive do you feel you are coding with an LLM? As another HN user said, to me it's like a talking dog -- incredible yet useless.


I feel it gives me about a 30% lift on mechanical tasks and a 60% lift on learning / unblocking in areas of ambiguity. I use GPT for the mechanical tasks and ChatGPT for the learning tasks. An example of the learning would be “explain to me the use of Box::pin in the context of Rust futures” or some such, but also sometimes some common idiom I’m brain-farting on. Searching Kagi will yield the answers, just more slowly, deeply embedded in some document or Stack Overflow answer vomit, requiring a lot of wasted effort that fully distracts me from my flow. The fact that I can ask follow-up questions on areas of ambiguity is useful. When it hallucinates, it generally means I’m in an area that’s either undefined as of yet or really niche. The nice thing about programming is that hallucination feedback is basically instant, so I then pull out Kagi and research a bit, and maybe 90% of the time it turns out what I asked for just isn’t possible.

There has been some work done on generating code in a feedback cycle to winnow out hallucinations, and it seems to work fairly well [1]. I think 99% of the challenges LLMs face are primarily related to a lack of constraint, optimization, agency, and solver feedback. As they get integrated into systems with the ability to inform, constrain, and guide using classic AI techniques, their true value will be attainable. But they’re pretty useful even today.
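
To make the feedback cycle concrete, here's a minimal sketch in Python (ask_llm is a hypothetical stand-in for whatever chat-completion client you use; the loop just runs the candidate and hands the traceback back to the model):

    import subprocess
    import sys
    import tempfile

    def ask_llm(prompt: str) -> str:
        # Hypothetical wrapper around any chat-completion API.
        raise NotImplementedError

    def generate_with_feedback(task: str, max_rounds: int = 3) -> str:
        prompt = f"Write a Python script that {task}. Reply with code only."
        for _ in range(max_rounds):
            code = ask_llm(prompt)
            with tempfile.NamedTemporaryFile("w", suffix=".py",
                                             delete=False) as f:
                f.write(code)
                path = f.name
            # Hallucination feedback is basically instant: just run it.
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, text=True,
                                    timeout=30)
            if result.returncode == 0:
                return code
            # Feed the error back to the model and retry.
            prompt = (f"This script:\n{code}\nfailed with:\n"
                      f"{result.stderr}\nFix it. Reply with code only.")
        raise RuntimeError("no working candidate after max_rounds")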

N.b., I’m a 32-year veteran at the distinguished engineer level at FAANG and adjacent firms, and I program daily.

[1] https://voyager.minedojo.org


I use GPT to write code sometimes instead of importing a library. For instance, a function to breadth-first traverse a directed acyclic graph and slice it into levels. Another to find a node in a nested graph using partial paths. I could have written those functions, but GPT-4 does them correctly and faster.
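
The DAG-slicing one is small enough to sketch here for reference: a Kahn-style level slice, with my own names rather than whatever GPT-4 actually produced:

    from collections import defaultdict, deque

    def slice_into_levels(edges):
        # Level 0: nodes with no incoming edges. Level k: nodes whose
        # last remaining dependency was cleared in level k-1.
        indegree = defaultdict(int)
        children = defaultdict(list)
        nodes = set()
        for u, v in edges:
            children[u].append(v)
            indegree[v] += 1
            nodes.update((u, v))
        frontier = deque(n for n in nodes if indegree[n] == 0)
        levels = []
        while frontier:
            levels.append(list(frontier))
            nxt = deque()
            for u in frontier:
                for v in children[u]:
                    indegree[v] -= 1
                    if indegree[v] == 0:
                        nxt.append(v)
            frontier = nxt
        return levels

    # slice_into_levels([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")])
    # -> [["a"], ["b", "c"], ["d"]]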


> incredible yet useless.

I almost always begin my tasks with ChatGPT by asking it for a framework: "Design an x request that gets me results in the form of this JSON, and now display it in a y table according to these criteria."

It gives me a framework; I adjust and expand upon it. Works wonderfully.
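
The skeleton that comes back looks something like this (endpoint and field names here are hypothetical, just to show the shape):

    import requests

    # Hypothetical API and fields; the real "x request" and "y table"
    # depend on the task at hand.
    resp = requests.get("https://api.example.com/items",
                        params={"status": "open"}, timeout=10)
    resp.raise_for_status()
    rows = resp.json()["items"]

    # Crude fixed-width table as a starting point to expand upon.
    print(f"{'id':<8}{'name':<24}{'score':>6}")
    for r in rows:
        print(f"{r['id']:<8}{r['name']:<24}{r['score']:>6}")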

I even have a 'bot' running on one of the IRC channels that was almost 100% written by ChatGPT itself.


> to me it's like a talking dog

That's if you're explicitly asking it questions. Have you sat down and coded with Copilot offering suggestions as you go? It's honestly incredibly helpful, especially when working in a new language or stack.


From my personal experience, it's quite useful, but not as much as a lot of people thought it would be. Its ability to solve 'simple' questions is great. I use it more as a smarter Google.


One simply needs to think more laterally about the dog.


The second group is going to be using chat interfaces in every app, whether they like it or not, in under 2 years. I think it'll be a net benefit: boomers (and everyone else too) will finally get their wish of just telling the computer what to do.



