1. Many difficult tasks can be decomposed into trivial tasks.
2. Automating away well-defined trivial tasks frees up more time for figuring out the difficult tasks.
3. If you're running solo, being able to ask for alternative perspectives or to validate ideas - even if that's against a compressed archive of Reddit answers - is still valuable.
4. How is this better or worse than making progress through asking questions on X/Reddit/mailing lists/IRC/Usenet/your local library? I'll tell you: it doesn't irritate other people as much, and it's likely a lot more efficient. I get it, "I spent 20 minutes with an LLM and made a one-page HTML website!" doesn't sound impressive, until you compare it with the alternative from about a year ago: "I spent two days going through awful ad-laden tutorials and made a one-page HTML website!".
The big cognitive gap people might be running into - and I think you might be as well - is that people think these LLM tools are there to replace people and to do things that weren't possible before. They specifically aren't. They're tools to do stuff we already know how to do, but a bit faster.
Nobody is making explicit claims of magic. Nobody - other than breathless media - thinks this will replace people doing the jobs they do today. It's just going to make it a bit quicker to get some stuff done.
They're better screwdrivers, not replacement geniuses.
May I ask, have you tried to do something difficult with it? Have you concluded it's impossible, or are you guessing that it isn't possible due to a lack of signal? I think it might actually be impossible, but I still think they're valuable tools for the reasons above.
> 4. How is this better or worse than making progress through asking questions on X/Reddit/mailing lists/IRC/Usenet/your local library? I'll tell you: it doesn't irritate other people as much, and it's likely a lot more efficient. I get it, "I spent 20 minutes with an LLM and made a one-page HTML website!" doesn't sound impressive, until you compare it with the alternative from about a year ago: "I spent two days going through awful ad-laden tutorials and made a one-page HTML website!".
I think this is the money-shot. LLMs (specifically ChatGPT) have helped me debug weird issues and get started with new technologies / libraries where searching for the issues on Google did not yield (good) results.
> Many difficult tasks can be decomposed into trivial tasks.
Could you or anyone provide examples of this?
Because in my experience, it's not true. I've found that difficult tasks may have parts that are trivial, but always have a truly difficult core, which is what makes them difficult in the first place.
Whereas plenty of long/boring tasks can be decomposed into individual trivial tasks - but nobody's calling those difficult. Just long and boring.
But maybe I'm misunderstanding, so I'd love any counterexamples.
I'm going to ask how you go about solving those difficult cores, and invite you to prove my hypothesis wrong. :-)
I'm going to suggest hard problems are solved by breaking them down into a set of smaller problems.
Perhaps that means a hypothesis or two and some experiments that need designing. Or perhaps you need to understand the problem from different perspectives, or research whether something similar exists in a different domain.
Aristotle, Euclid, Copernicus, Galileo, Darwin, Tesla, Edison, Turing, Einstein... not one of them had an entire solution to a hard problem revealed in a single gulp. Every single one of them took small iterative steps and needed to break a problem down and approach each part of it as an individual problem in its own right.
You might be doing an experiment nobody has ever done before to test a hypothesis that nobody has ever considered in human history, but LLMs can still help you determine if the hypothesis is framed correctly, understand if your experiment has parallels elsewhere in the knowledge it's exposed to, research how to validate your results, and help you with a ton of other small mundane tasks needed to do good science. Doesn't make it a scientist, just makes itself useful to scientists.
Likewise, you might not know how to write a Phoenix LiveView application to predict in-play odds of your favourite sports team and identify value in online sports books (trust me, this is a hard problem), but it can help break that down into the individual pieces, let you get started with small utility functions that you can build on, and work with you to make you more productive. Doesn't replace the work you need to do as an engineer, just makes itself useful to you as an engineer.
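To make that concrete, here's roughly the kind of small utility an LLM could get you started with. A minimal sketch in Python (rather than Elixir, for brevity); the function names and the 2% edge threshold are illustrative assumptions, not anything from a real project:

```python
def implied_probability(decimal_odds: float) -> float:
    """Convert bookmaker decimal odds into an implied probability."""
    return 1.0 / decimal_odds


def remove_overround(probs: list[float]) -> list[float]:
    """Normalise implied probabilities to sum to 1, stripping the bookmaker's margin."""
    total = sum(probs)
    return [p / total for p in probs]


def is_value_bet(model_prob: float, decimal_odds: float, min_edge: float = 0.02) -> bool:
    """Flag 'value' when your model's probability beats the book's implied
    probability by at least a minimum edge."""
    return model_prob - implied_probability(decimal_odds) > min_edge


# A two-outcome market quoted at 1.90 / 2.10:
fair = remove_overround([implied_probability(1.90), implied_probability(2.10)])
print(fair)                       # ~[0.525, 0.475] once the margin is stripped
print(is_value_bet(0.58, 1.90))   # True: model says 58%, book implies ~52.6%
```

None of this replaces the actual modelling work; it's the boilerplate an LLM is good at producing so you can spend your time on the hard part.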
> Aristotle, Euclid, Copernicus, Galileo, Darwin, Tesla, Edison, Turing, Einstein... not one of them had an entire solution to a hard problem revealed in a single gulp. Every single one of them took small iterative steps and needed to break a problem down and approach each part of it as an individual problem in its own right.
I don't think that's true at all. On the contrary, it was a ton of thinking and experimentation and then getting really lucky with major flashes of insight -- which is the very opposite of breaking something down into tractable parts.
Einstein coming up with general relativity wasn't something that he did, or that anybody could have done, by gradually breaking the problem of gravity down into parts that were then straightforward to solve. That's not how his discovery worked at all.
Difficult problems are difficult precisely because they can't be solved by the straightforward approach of breaking them down. They seem quite impossible to solve until you try a bunch of things, sometimes for years/decades, throwing stuff at the wall, and you hope you get lucky. But many times (usually?) you don't. That's what makes them difficult.
> They seem quite impossible to solve until you try a bunch of things, sometimes for years/decades, throwing stuff at the wall

... is both breaking things down into small steps, and something that an LLM can help with.

Do you see what you did there? That's breaking things down and trying lots of small things.
What you seem to think I'm suggesting - which I'm not - is that solving hard problems is linear once they're fragmented.
I'm suggesting hard problems are only solvable through fragmentation - breaking them apart - but I'm not in any way suggesting that this means they're solvable through a simple linear thinking process. Fragmentation and linearity are not the same thing.
If anything, LLMs can help make non-linear thinking more efficient by getting you out of small areas of focus, and, as I said originally, helping you explore a problem through metaphor, different perspectives, different domains, and so on.
> > They seem quite impossible to solve until you try a bunch of things, sometimes for years/decades, throwing stuff at the wall
> Do you see what you did there? That's breaking things down and trying lots of small things.
No, it's not - and that's my whole point.
If you're trying to find a filament that will work for a commercial light bulb, then testing out 1,000 materials is not breaking anything down to solve the problem. Instead, it's trial and error. They're literally the opposite approaches.
Some problems are solvable through fragmentation. Many others are just fundamentally not, and these obviously tend to be the more difficult ones.
> If anything, LLMs can help make non-linear thinking more efficient by getting you out of small areas of focus
That's literally the opposite of breaking things down into small steps. So now I don't even know what you're arguing anymore. But also, I don't see how LLMs help with that in the least. I have not seen any examples of LLMs demonstrating "non-linear thinking". They literally work, well, with linear token prediction -- one token at a time.
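To be concrete about "one token at a time": here's a minimal sketch of autoregressive decoding in Python, with `toy_model` and `greedy_sample` as hypothetical stand-ins rather than any real library's API:

```python
def greedy_sample(logits):
    """Pick the highest-scoring token - the simplest decoding strategy."""
    return max(range(len(logits)), key=lambda i: logits[i])


def generate(model, prompt_tokens, max_new_tokens, sample=greedy_sample):
    """Autoregressive decoding: emit one token per step, each conditioned
    on everything generated so far."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = model(tokens)          # score every candidate next token
        tokens.append(sample(logits))   # the choice feeds back into the context
    return tokens


# Toy "model" over a 10-token vocabulary: always scores (last token + 1) highest.
toy_model = lambda toks: [1.0 if i == (toks[-1] + 1) % 10 else 0.0 for i in range(10)]
print(generate(toy_model, [0], 5))   # [0, 1, 2, 3, 4, 5]
```

The loop really is sequential: each token is picked conditioned on everything generated so far, then appended and fed back in.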
Perhaps you could provide an example of a hard problem that was solved without breaking it down into small pieces.
No academic paper ever written, invention ever created, or piece of art I can think of or have heard of (including general relativity, which was referenced earlier in the thread) came about by just sitting there, thinking about the big hard problem, and solving it in one go.
The closest thing I can think of to what you're referring to is accidental discovery, so I think that's where our wires are crossed.
At its core I still don't buy that LLMs are useless in the context of solving hard problems. You disagree. Time and experience will tell, but it seems more likely that successes will be attributed to them than not.
Just wait until ChatGPT is ad-laden. Type your question, watch an 8-second ad, see half your answer, watch a 16-second ad, see the rest of your answer.
If and when that happens, either we switch to the open source models that are rapidly catching up, or it keeps our interest despite the adverts by consistently improving and staying ahead of the open source models.