It’s churn because every new model may break strategies that worked before.
Nobody is designing how to prompt models. Promptability is an emergent property of these models, so effective prompting strategies could change entirely with each generation of any model.
IMO the lack of real version control and of reliable programmability have been significant impediments to impact and adoption. The control surfaces are more brittle than, say, regex, which isn’t a good place to be.
I would quibble that there is a modicum of design in prompting; RLHF, DPO, and ORPO are explicit attempts to design the models to be more promptable. But the methods don’t yet scale adequately to the variety of user inputs, especially in a customer-facing context.
My preference would be for the field to put more emphasis on control over LLMs, but it seems like the momentum is again on training LLM-based AGIs. Perhaps the Bitter Lesson has struck again.
People are trying to design how to prompt, but it’s very different in both implementation and result than designing a programming language or a visual language, ofc.
It's interesting that despite all the real issues you're pointing out, a lot of people are nevertheless drawn to interact with this technology.
It looks as if it touches some deep psychological lever: having an assistant that can carry out tasks for you without your having to learn the boring details of a craft.
When I was a kid I messed with computers because they were new, fun, and interesting. At the time I never realized they'd be my source of a living one day.
Current AI has brought back a lot of that wonder and interest for me, and I'm sure the same is true for a lot of other computer nerds.
I’d consider myself one of those nerds. I’ve been in love with programming since I was 9 (20+ years ago now).
I was mystified by LLMs a couple years ago. But after really understanding how they work and running into their limitations, a lot of that sheen was lost.
There’s not a ton of interesting technology happening with LLMs, more a ton of interesting math. (Math, especially linalg, is not the part of computer science I, personally, fell in love with.)
The outputs of LLMs, unlike those of programming languages, are pretty random and driven by trial and error. There’s never any real skill or expertise being built by playing with these tools. My control over the output isn’t as direct or understandable as with programming.
There’s no joy of discovery, only joy of getting the slot machine to give me what I want once in a while.
I’ve regained a lot of that wonder, recently, by doing graphics programming and learning lisp. Going against industry trends in my recreational programming has helped the field feel fresh to me.
Regardless, I don’t think the extreme minority of people who are truly nerdily passionate about tech are the “a lot of people” the OC or I was talking about.
Unless your business is customer service reps, with no ability to do anything but read scripts, who have no real knowledge of how things actually work.