GPT-4 prediction: It won’t be very useful (nostalgebraist.tumblr.com)
30 points by astrange on Jan 8, 2023 | 28 comments



"Burgeoning technology does not have practical application"

What a brave and unique prediction.


Morse code, mathematics, playing chess, string reversal - these are all things that a computer can already do, but GPT isn't good at. So why not combine GPT and something else, so it can make some kind of external call to a service and have that do some computation for it?
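A minimal sketch of that idea (all names here are hypothetical, not any real GPT API): the model would emit a tool name and an argument, and a thin dispatcher would hand the exact computation to deterministic code:

```python
# Hypothetical sketch (not a real GPT API): the model decides *which*
# tool to call and with *what* argument; deterministic code does the work.

def reverse_string(s: str) -> str:
    # Exact string reversal -- trivial for a program, unreliable for GPT.
    return s[::-1]

# Registry of external "services" the model could route requests to.
TOOLS = {"reverse": reverse_string}

def handle(tool_name: str, arg: str) -> str:
    # In a real system the (tool_name, arg) pair would be parsed out of
    # the model's output; here we just dispatch the call.
    return TOOLS[tool_name](arg)

print(handle("reverse", "morse"))  # esrom
```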


Because GPT doesn't "know" what you're asking it to do. If you want it to be able to call out to a chess engine when you ask it "Solve this chess problem" then we're back in the old world of grammars and trying to parse sentence structure.


It's anecdotal, but in my usage GPT "knows" how to formalize a problem far better than it knows the actual factual answer. If I ask for a list of animal names that end with the string "ar", the answer is nonsense, but if I ask for a Python script that checks if a string ends with the substring "ar", the answer is correct. Same with simple math problems and many other questions. I was wondering the same thing: whether it would be possible to use some kind of deterministic confirmation as part of GPT training.
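For what it's worth, the script the commenter describes really is a thin wrapper around Python's built-in str.endswith:

```python
def ends_with(s: str, suffix: str) -> bool:
    # Thin wrapper around the built-in str.endswith check.
    return s.endswith(suffix)

print(ends_with("jaguar", "ar"))  # True
print(ends_with("zebra", "ar"))   # False
```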


Confident predictions about something anyone outside the lab has little clue about. Looks like clickbait built on an extreme contrarian view.


The critical targets keep moving, yet the models keep improving. What gives?

Where's the wall? We don't appear to have hit one yet. The stuff just keeps getting more and more mind blowing by the month^W week. I'd hate to make confident predictions against AI when the shape of the graph keeps going up and to the right at an exciting, almost alarming, pace.

As an aside, this kind of analysis strikes me as what advanced aliens must think about human life on earth.


GPT-3 is trained by looking at approximately all text in the world once. So the wall is when you run out of text.

And I don't think it's really gotten better recently. ChatGPT is not new, you just didn't know about it before. (Also, they're paying for it. You wouldn't enjoy using it nearly as much at full price.)


Humans become intelligent after reading / hearing several orders of magnitude fewer words than GPT-3 has access to.

There are dozens of ways GPT-4 could be improved over GPT-3 without needing more training data. E.g., through reinforcement learning (talking to itself), tweaking the neural architecture, spending more time training, etc.


No, because multi-modal learning is next. A human can learn what a rock is after N=1 only because we have the visual/tactile experience of the rock (combined with our intuitive understanding of physics, etc.) to go with the text label. We can immediately guess how it fits into our world from that visual/tactile experience. GPT needs a very large N to grasp the concept of a rock because it requires a lot of textual associations to figure that concept out.


It is next conceptually, but it's not proven you can do it until someone's done it. Other rumors suggest that whatever GPT-4 is, it's not multimodal.

I don’t believe large models are great multimodal demonstrations either, insofar as being large just lets you memorize different modalities side by side without necessarily integrating them.


There are practically unlimited amounts of text, including what the model generates itself. Also, intelligence isn't the same thing as input.


I don't think that training a GPT on GPT output is likely to be helpful. If nothing else, it's going to be like a human getting stuck in a filter bubble - it's not going to improve correspondence with reality.


I’m curious whether there is a future coming where we praise researchers for any excellent, new, and novel research that can be added to the knowledge dataset.


There was a ChatGPT before November of last year?


ChatGPT is just InstructGPT with prompt engineering to make the model behave like it's having a conversation. There was probably some additional fine-tuning done to make it self-aware (think of the "I am a chat bot trained by OpenAI" spiel), but in my experience it has performance comparable to text-davinci-003, which was released in Nov 2022.
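A rough sketch of what "chat via prompt engineering" means here (the prompt format below is illustrative, not OpenAI's actual one): a plain completion model is steered into dialogue by rendering the conversation history as a transcript and asking the model to complete the next turn:

```python
# Illustrative prompt format only -- not the format OpenAI actually uses.
def build_prompt(history, user_message):
    lines = ["The following is a conversation with a helpful AI assistant."]
    for role, text in history:
        lines.append(f"{role}: {text}")
    lines.append(f"User: {user_message}")
    lines.append("AI:")  # the completion model fills in the next turn
    return "\n".join(lines)

print(build_prompt([("User", "Hi"), ("AI", "Hello!")], "What is Morse code?"))
```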


It was called text-davinci-002 on the OpenAI Playground. This is a less-than-revolutionary training improvement and a better UI.

ChatGPT is basically text-davinci-003 and was trained in early 2022 but not released then.


Why can't something just be cool and amazing because it exists and does something we've never seen before?


It can. It is.

I suspect I’m coming across as pedantic and you’re saying “why can’t...” as a way to shake the OP free from the intellectual chains of being a critic, but it’s important enough to me that we normalize openly celebrating cool and amazing things.


But GPT-3 is already useful to me lol.


Can you go into details? Do you have a business use case you are exploring or developing?


It's useful as a learning tool. I can ask it how to do almost anything, or how almost anything works.

When I want to learn how to do something new in a programming language, I ask ChatGPT and it explains in detail with a first pass example in code.


Can you quantify that utility in how many dollars/month you would pay for access to it?


If I had to quantify, I'd say maybe on the order of ~$10/month, like a media subscription.

Hard to say for sure, and hard for me to predict where the market would price a service like ChatGPT long-term.


No, just personal use. I use it to help explain relationships between ideas or concepts I’m not familiar with, especially with economics and biology. It does an excellent job helping me understand what certain words mean in certain contexts.

With the caveat being I don’t assume the information it tells me is always accurate. I usually do a Google search to confirm something new but without ChatGPT I wouldn’t have known what to search for.

For now it mostly helps me understand the context behind something I’m learning about. Saves me like 10 minutes of extra googling.


Phew. My morse code business is safe for now.


The third GPT smear article on the front page today. Interesting.


I think the article about ChatGPT being unlikely to replace search engines is right, though.

I think that LLMs are something entirely new and calling them a “Google killer” is a failure of imagination.


I think you’re entirely underestimating how far Google has fallen. ChatGPT has answered a lot of the frustrations I have in trying to use Google the last few years.



