Hacker News
Ask HN: Is the world ready for the inevitable ChatGPT rug pull?
21 points by alexfromapex on Aug 16, 2023 | 16 comments



/r/localllama is working on it. So is every enterprise. Nobody wants OpenAI to own the space centrally or to train the model that could undermine their business; not even OpenAI wants that.

There is a global race on to acquire vector math hardware (GPUs, TPUs, etc.) and the capacity to manufacture it. LLMs at their best are intimate and deserve first-class isolation, and running them on hardware you own is the best way to guarantee that.

OpenAI enjoys a market-leading position and may for some time, but this technology is the last thing that makes sense to be centralized in the long term. For now it's impractical for most to run a 1.7-trillion-parameter model on hardware they own, but OpenAI is catalyzing a movement of hundreds of thousands of people who, in the past few months, have been able to run fine-tuned 13-billion-parameter models with usable results on consumer hardware.
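The gap between those two scales is easy to see with back-of-envelope arithmetic. A minimal sketch, assuming an illustrative 4-bit quantization for the local model, fp16 for the big one, and a ~20% overhead guess for KV cache and runtime buffers:

```python
# Back-of-envelope estimate of the RAM/VRAM needed to host an LLM's weights.
# The 20% overhead for KV cache and runtime buffers is an illustrative guess.
def model_memory_gb(params: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Approximate memory footprint in GB: parameters x bits per weight, plus overhead."""
    return params * bits_per_weight / 8 / 1e9 * overhead

print(f"13B @ 4-bit:   {model_memory_gb(13e9, 4):.1f} GB")    # ~7.8 GB, consumer territory
print(f"1.7T @ 16-bit: {model_memory_gb(1.7e12, 16):.0f} GB") # ~4080 GB, datacenter only
```

Even aggressive quantization doesn't bring a 1.7-trillion-parameter model anywhere near consumer hardware, which is why the local-model movement centers on the 7B-13B range.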

I don’t worry much about an OpenAI rug pull. I don’t worry much about the AI revolution happening too quickly either, not at this pace. Should I?


I think v3.5 is already a lot worse than it was earlier this year, and v4 is basically the only option if you want a reliable answer.

This coincided with the update that made v4 the default option for new chats for Plus subscribers (of which I am one), drastically increased its speed, and removed the hourly message limits.


I hang around the OpenAI forum every day. The common sentiment this month is the opposite: ChatGPT v4 is worse, and v3.5 is actually better for many use cases. 3.5 handles roughly double the input tokens, so it's a lot better for programming and seems to be better at holding long conversations too.

There are still message limits too; the cap is just doubled.


So I am not crazy: 3.5 seems much dumber than before. It used to be able to produce a roughly 75%-working solution to coding problems; now it spits out literal nonsense or doesn't even try.


While I agree that 3.5 seems to be getting worse, I haven’t had it spit out “literal nonsense”. It’s still pretty decent at spitting out code for me.

What’s an example prompt that will induce nonsense?


"the sudden and intentional removal of liquidity or funds from a project" - kind of a stretch from that definition, but I think OP is saying the value of ChatGPT API calls is going to zero? Because soon every PC will be able to run the exact same LLM?


Bing Chat has already shown that injecting advertising into generated content is possible. If paid API usage and ChatGPT Plus subscriptions can't keep the bills paid, there's a whole bunch of enshittification levers that can be pulled to make money: ads, quotas, additional in-page upsells, and locking it down entirely. It's just a matter of keeping the value proposition slightly ahead of competitors.

ChatGPT has the lion's share of name recognition and media buzz but every one of the competitors in this space is probably just waiting for them to misstep so they can step into the spotlight.


Elaborate.


I think many readers here are familiar with the coined term "enshittification", which describes a common life cycle of tech products: they are eventually modified to prioritize shareholders over their users. It has happened to software like Docker Desktop, Reddit, etc.: a few years go by, everyone is using the software, and then, once the ostensible vendor lock-in is at its peak, the shareholders decide the product is hard for customers to divest from and commercialize that lock-in by charging for what was previously free. If that happens to ChatGPT, now that many content creators are legally protecting their content from becoming training data, would there be any viable free or low-cost alternatives?


I have a question that comes at this from the other end: what kind of LLM or 'AI' would content creators intentionally grant access to their content?

I think this question is important because it gives the decision back to content creators, and that's where I think it should be, as a point of principle.

The alternative - and this is what I think will happen - is that LLM scrapers will just start to ignore the instructions to go away once it becomes clear that no content creators want their work to be scraped for nothing in return. So 'AI' users will continue to get free stuff, but at the expense of everyone they stole from.


ChatGPT free was always just a demo. Plus carries significant features: plugins and a much more powerful model. The code interpreter alone is well worth $20/month for certain groups.

The API is the other part of the funnel. It has been generating revenue from the beginning. Enshittification is more likely to happen to companies with terrible ARPU, like Reddit or X.

Then again, Sam Altman was on Reddit's board, and during that time he defended Reddit losing money because they still had plenty of money in the bank.


Open-source/open-weight models have been getting better and better. I don't think I would start a business that relied entirely on ChatGPT (starting a business on somebody else's platform is always going to be a high-risk choice), but for people including it in their business, I don't think the risk for ChatGPT is higher than for any other vendor.


I assume: Cool stuff for free usually starts having some kind of cost suddenly, and people are usually surprised when it happens?


ChatGPT API is not free.


I expect that in three years at this rate, most people will just be accessing LLMs via AWS. Many use GPT through Azure now. ChatGPT could have as many clones as Flappy Bird.


Who is the carpetbagger?



