Though if you're an expert in a subject you ask it about, you can see when it transitions from "knowledge of the subject" to "your uncle who made a living hustling pool and can tell a story made of nothing but whoppers, sounding like the most honest person you've ever met."
It's reasonable to assume it does the same thing over subjects you know less about.
It's really bizarre that we ended up with the largest LLM deployment centered on something it doesn't actually do that well: knowledge retrieval.
Hopefully soon we'll see it reading the story itself, stripping out irrelevant ads/SEO, and summarizing it.
I actually have a dream of one day going back to a flip phone where I'll get an AI call summarizing emails or messages that are important, as well as any relevant news stories where I can ask for more detail if needed.
The Internet is amazing, but honestly I don't know that the degree to which I personally engage with it is the ideal cost/benefit as opposed to using an intermediary.
> I actually have a dream of one day going back to a flip phone...
Do it. Sonim makes some absolute brick chonks of phones these days.
> ...where I'll get an AI call summarizing emails or messages that are important, as well as any relevant news stories where I can ask for more detail if needed.
I just try to not have unimportant emails come in, though eBay can't be convinced to turn off their damned shipping notifications (or I've not found how to do it). The 1st of the year is a very good time to start wrangling your inbox - every email that comes in is either of value, or you figure out how to eliminate it. Unsubscribe from everything you don't really care about.
And news summary emails are a thing. I route them and various Substack newsletters to my Kobo via Pocket.
> The Internet is amazing, but honestly I don't know that the degree to which I personally engage with it is the ideal cost/benefit as opposed to using an intermediary.
You get a lot of the benefits with far less use than most people consider normal.
I check email via mail clients, when my desktop computers are on. It doesn't come to my phone. I'm off instant messengers most evenings. And I turn my cell phone off at night (which also leads to, on the Sonim XP3+, about two weeks of useful battery life).
But with user generated content like this thread other people can criticize each other and discuss things. If someone is spectacularly wrong then chances are that will be pointed out.
ChatGPT is essentially an echo chamber between you and ChatGPT.
That's a personal problem, not a technology problem. If you let yourself be brainwashed by your weird uncle, alt news, or ChatGPT, that's gotta be on you.
The point is that if I ask about topics I don't know about, then by definition I can't judge what is or isn't true. That is also the case on HN where I'm reading a discussion about a topic I don't know about, but that on HN there are some signals such as votes and replies to comments, whereas on ChatGPT there are no external signals. Is this perfect? Of course not. Does it usually work kind of okay? I'd say it does.
For books or sites or most things these signals exist too; I can look up what others have said about it.
Even your uncle at a birthday party is subject to some controls, because other family members can call him out on his bullshit, or they can later tell you "psst, what Jack told you is a load of rubbish", or whatever.
There is no "generic review" of ChatGPT because it tells every single person someone else. There is no shared content we're both looking at.
ChatGPT very much works in a substantially different way than most other public sources of information.
> For books or sites or most things these signals exist too; I can look up what others have said about it.
That's no different from ChatGPT. Books at least have a publisher as a control, but that's not really the case for a website. That's why fake news is a real issue.
And your uncle may talk to you in private. Or maybe it's your parents around the family dinner table. There are many sources of dangerous misinformation, not just ChatGPT.
Education and critical thinking are the real solution.
Let's not lose track of what this discussion is about: it's not about whether your uncle might be feeding you bullshit, it's about whether ChatGPT is a good source to learn about new topics you're not knowledgeable on. Your crackpot uncle is not the baseline here – a well-reviewed book (or documentary series, or podcast, or whatever) is.
That there are crackpot books out there is beside the point; I can trivially find out which books are good and which are crackpot. With ChatGPT you need to verify every single reply it gives you.
And there is nuance here. Take Horrible Histories. I like Horrible Histories – it's fun, and generally the history is accurate. But it's not perfect and has mistakes, and there's a ton of lists enumerating these mistakes, so you can safely watch the show and then read these lists to ensure you're not believing a mistake.
There is no "historical inaccuracies in ChatGPT" list. It would be impossible to construct such a list. You need to verify every single reply it gives you, and every detail of that not just the general gist.
General critical thinking helps you surprisingly little here, because many of the signals of reliability you usually rely on are lacking. Thinking you can trust ChatGPT on topics you're not already fairly knowledgeable on and that can't be easily tested empirically (unlike math or, to some degree, programming) is a good way to get fooled.
I'm not primarily talking about random crap from random people; I'm talking about making a conscious effort to learn more about a specific topic. I really don't know how to explain this better than I already did. You keep trying to turn this into a "people on random sites saying stupid stuff" vs. ChatGPT thing, but that's not what this is about. But even there, of course you shouldn't believe random stuff on the internet, but at the same time it does have some mechanisms for correction that ChatGPT lacks.
The problem is that you are outsourcing that information curation to a bot that is 100% happy to feed you bullshit - and you have no recourse and no way to check, because it doesn't tell you how it arrived at the information it's feeding you.
Also, that information is by definition old. For some things that doesn't matter; for many others, like current news, it very much does.
So while the attraction of having a personal "butler" doing the hard work for you is great, no doubt, it is ultimately no better than getting one's information from Facebook - you have ceded control and get whatever the creators of the system deem you worthy of having. Including political and other agendas. Just look at the censorship the Baidu bot is applying and the entire "jailbreaking" industry around escaping the artificially imposed boundaries on these bots, for good or bad reasons.
And ultimately even ChatGPT and its ilk won't help us any once the web they scrape is filled with machine-generated spam and crap drowning out all the useful information. Garbage in, garbage out applies even to these robots. They aren't magic.
Agreed. I think this is a good argument for developing open LLMs.
If you're old enough, similar arguments existed about "open / free internet" and we're seeing some of the consequences of having it be controlled by mega corps.
That's naive. There are open LLMs already. The problem is not the LLMs but that the data they are trained on is going to be increasingly spammy, scammy garbage.
The LLMs don't have any sort of magical way of filtering that out. So if you train an LLM on spam, hoaxes, and similar garbage, you will get recommendations to drink bleach as a cure for covid. And that applies regardless of whether the bot is trained by a megacorp or by someone in their garage using an open-source network and tools.
For now. How long do you think it'll really take before it's also pitching you products? We might have a golden period, kind of like when streaming and Netflix first arrived, but that will surely give way to enshittification and ads.
Lol ChatGPT is the biggest boost for scammers out there, and what's gonna happen in the future when the only thing these LLMs train on is other AI-spewed crap?