Further evidence for my justified true belief that the world will rapidly divide in two: those who already had programmatic control over their computers, now infinitely more productive through direct access to the GPT APIs (to say nothing of the firms that will use LLMs only internally, trained on their own codebases for example), and those who believe GPT = ChatGPT, for whom it will slowly become just another conduit for ads via plugins. It's quite sad when even Nature magazine becomes a spokesperson for the latter group, who will go on believing that LLMs are a racket on the order of Bitcoin.
With more and better models released, getting access to OpenAI's API is becoming less interesting.
What is interesting is having access to GPUs (cloud or real hardware). The world is divided into "has access to enough GPU power" and "doesn't have access to enough GPU power".
My prediction is that this will get worse for a while.
When I use ChatGPT, I find anything but GPT-4 unusable; when I'm programming against the API, I actually tend to find myself using GPT-3.5. I still haven't had a chance to experiment with the open-source LLMs, but that's my project for next month.
Maybe, but I can't help but notice the "programmatic control" is over someone else's computer. Those who care the most about having programmatic control over their computers are also those who would rather send their prompts to their own GPU rather than an Azure server farm. I believe there will be a programming revolution built on the foundation of LLMs, but it won't really take off until we can use local LLMs for the bulk of the processing.
And with computers there was a divide between those willing to learn to program and those using the applications that were programmed... what's your point? The majority of society needs a pretty big level of abstraction to be willing to use something.
Yes I think I have a similar sentiment. Articles like these would do better to take the approach of
"Oh shit, something huge is happening and we might not be intelligent enough to see the ramifications, but here's a humble attempt"
rather than
"Ha, the techies have done something they think is impressive again. It's certainly interesting, but as usual they're exaggerating it and failing to think from a nuanced human perspective. However us journalists are trained in that sort of thing, so we can help out here, and we know you readers will be all too familiar with the way those techies can only think like computers lol."
I feel it gives me about a 30% lift on mechanical tasks and a 60% lift on learning / unblocking in areas of ambiguity. I use GPT for the mechanical tasks and ChatGPT for the learning tasks. An example of the learning would be "explain to me the use of Box::pin in the context of Rust futures" or some such, but also sometimes some common idiom I'm brain-farting on. Searching Kagi will yield the answers, just more slowly and deeply embedded in some document or Stack Overflow answer vomit, requiring a lot of wasted effort that fully distracts me from my flow. The fact that I can ask follow-up questions on areas of ambiguity is useful. When it hallucinates, it generally means I'm in an area that's either undefined as of yet, or really niche. The nice thing about programming is that hallucination feedback is basically instant, so I then pull out Kagi and research a bit, and maybe 90% of the time it's just not possible.
There has been some work done on generating code in a feedback cycle to winnow out hallucinations and it seems to work fairly well [1]- I think 99% of the challenges LLM face are primarily related to a lack of constraint, optimization, agency, and solver feedback. As they get integrated into a system with the ability to inform and constrain and guide using classic AI techniques their true value will be attainable. But they’re pretty useful even today.
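The feedback cycle described here can be sketched roughly as follows. This is a minimal illustration, not the cited work's method: `llm_complete` is a hypothetical stand-in for whatever completion API you use, and the repair-prompt wording is my own assumption.

```python
import subprocess
import sys
import tempfile

def generate_with_feedback(prompt, llm_complete, max_rounds=3):
    """Ask the model for code, execute it, and feed any traceback
    back into the next prompt until it runs cleanly or we give up."""
    code = llm_complete(prompt)
    for _ in range(max_rounds):
        # Write the candidate code to a temp file and try to run it.
        with tempfile.NamedTemporaryFile(
            "w", suffix=".py", delete=False
        ) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            [sys.executable, path], capture_output=True, text=True
        )
        if result.returncode == 0:
            return code  # ran without error; accept this version
        # Otherwise, hand the traceback back to the model and retry.
        code = llm_complete(
            f"{prompt}\n\nYour previous attempt failed with:\n"
            f"{result.stderr}\nPlease return a corrected version."
        )
    return code
```

The solver feedback here is just "did it run", but the same loop structure accommodates richer constraints (unit tests, type checkers, linters) as the comment suggests.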
N.b., I'm a 32-year veteran at the distinguished-engineer level at FAANG and adjacent firms who programs daily.
I use GPT to write code sometimes instead of importing a library. For instance a function to breadth first traverse a directed acyclic graph and slice it into levels. Another one to find a node in a nested graph using partial paths. I could have written those functions, but GPT4 does it correctly and faster.
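For a concrete sense of the first function being described, here is one way the level-slicing might look. This is a sketch of the general technique (Kahn-style BFS layering), not the commenter's actual code; the function name and representation are my own assumptions.

```python
from collections import defaultdict

def slice_into_levels(edges):
    """Group the nodes of a DAG into levels: level 0 holds nodes with
    no incoming edges, and each later level holds nodes whose last
    remaining dependency was in the previous level."""
    indegree = defaultdict(int)
    children = defaultdict(list)
    nodes = set()
    for src, dst in edges:
        nodes.update((src, dst))
        children[src].append(dst)
        indegree[dst] += 1
    # Start from the roots (sorted for deterministic output).
    frontier = [n for n in sorted(nodes) if indegree[n] == 0]
    levels = []
    while frontier:
        levels.append(frontier)
        nxt = []
        for node in frontier:
            for child in children[node]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    nxt.append(child)
        frontier = nxt
    return levels
```

For example, the diamond DAG a→b, a→c, b→d, c→d slices into `[["a"], ["b", "c"], ["d"]]`.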
I almost always begin my tasks with ChatGPT by asking it for a framework: "Design an x request that gets me results in the form of this JSON, and now display it on y table according to this criteria."
It gives me a framework, and I adjust and expand upon it. Works wonderfully.
I even have a 'bot' running on one of the IRC channels that was almost 100% self written by ChatGPT.
That's if you're explicitly asking it questions. Have you sat down and coded with Copilot offering suggestions as you went? It's honestly incredibly helpful, especially when leveraging a new language or stack.
From my personal experience, it's quite useful, but not as much as a lot of people thought it would be. Its ability to solve 'simple' questions is great. I use it more as a smarter Google.
The second group is going to be using chat interfaces in every app whether they like it or not in under 2 years. I think it'll be a net benefit, boomers (and everyone else too) will finally get their wish of just telling the computer what to do.