New cognitive skills in the age of AI tailored information (safjan.com)
62 points by izik on March 7, 2023 | 63 comments



I clicked on the article hoping to find information on potential new cognitive skills humans will need to learn to differentiate Large Language Model (LLM) hallucinated facts from real facts, but unfortunately the article doesn't touch upon this.

Reading the comments it seems likely that the article is itself LLM-generated blogspam, in which case it won't be aware of the potential for hallucinated facts.

I was thinking the other day that we really need a new term for this. In 2016 we had "post-truth", but that implies humans deliberately making stuff up to deceive people; LLMs making stuff up don't really do so knowingly and don't really have a motive. There is the term "consensus reality", but the danger is that with more and more LLM-generated content appearing on the internet, potentially polluting future training data, we may find "consensus" isn't sufficient to determine reality any more. Perhaps the term for what we're heading towards is something like the "post-reality" era.

Not sure what the solution to this is either, other than withdrawing from the mainstream internet and sticking to the small known pockets of human resistance (while they still exist).


Perhaps instead of using the word hallucination we could use average-context. When the LLM doesn't have enough information it computes some average of the information available in the wrapping context, so a hallucination is a wrong result that comes from averaging over the context. But the context itself could also be wrong.
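
As a toy illustration of the alternative (refusing rather than averaging), one could threshold the model's next-token confidence and return an error below it. This is only a sketch under my own assumptions: the checkpoint name, the threshold and the greedy decoding loop are arbitrary stand-ins, not anything the article proposes.

    # Toy abstention sketch: stop generating when the next-token distribution
    # is too flat, instead of letting the model "average" its way to an answer.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # illustrative small causal LM, not a recommendation
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    def answer_or_abstain(prompt: str, threshold: float = 0.2, max_new_tokens: int = 30) -> str:
        generated = tok(prompt, return_tensors="pt").input_ids
        prompt_len = generated.shape[1]
        for _ in range(max_new_tokens):
            logits = model(generated).logits[:, -1, :]   # next-token logits
            probs = torch.softmax(logits, dim=-1)
            top_prob, top_id = probs.max(dim=-1)
            if top_prob.item() < threshold:              # too uncertain: refuse
                return "insufficient data"
            generated = torch.cat([generated, top_id.unsqueeze(0)], dim=-1)
        return tok.decode(generated[0, prompt_len:], skip_special_tokens=True)

    print(answer_or_abstain("West Ham United's latest result was"))

Whether users would actually prefer frequent "insufficient data" replies over fluent-but-wrong answers is, of course, the open question.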


Have there been papers published about what happens to the user experience when average-context is tightly constrained to small weighting ranges or eliminated altogether, and the model just throws an "insufficient data" error?


It handles human grammar by averaging and assuming contexts, so you can't really fix one side without hurting the other. Humans separate grammar from facts, but these language models don't; to them grammar and facts are the same thing, so you can't just tell one to stop lying without it also losing the grammar tricks we expect from it.


> I was thinking the other day that we really need a new term for this. In 2016 we had "post-truth", but that implies humans deliberately making stuff up to deceive people; LLMs making stuff up don't really do so knowingly and don't really have a motive. There is the term "consensus reality", but the danger is that with more and more LLM-generated content appearing on the internet, potentially polluting future training data, we may find "consensus" isn't sufficient to determine reality any more. Perhaps the term for what we're heading towards is something like the "post-reality" era.

Really this is just postmodernism; the general collapse in epistemic certainty leading to viewing reality purely in terms of text. "Il n’y a pas de hors-texte" (Derrida); there is nothing outside the text. GPT would have to agree with Derrida, because it knows nothing but text. It has "all" the text, or at least all the text that could be found and fed to it, but nothing outside that.

(and likewise it really accelerates "Sokal hoax" questions!)


You're absolutely correct to emphasize the text medium. Neurolinguistic programming isn't just an arcane meme for academics. The most important part of it is that it only works if the user cannot separate reality from words.

Marshall McLuhan is really the techno-prophet of our era.


I think noise-era is most apt. Over time people/consumers will lean into a signal-vs-noise approach to using the internet instead of a whatever-is-put-in-front-of-me-by-an-algorithm-but-call-it-'discovery' approach.


Since /r/confidentlyincorrect exists, I propose to call them ci-claims :D


“confabulations” seems to fit.


The LLM-generated misinformation isn’t any different than the misinformation we had before ChatGPT. (Perhaps it’s worded a bit better in some cases.)

We have the same sources for truth we've always had: trusted sources. Trusted sources for textbooks, articles, etc.


I can write a lot of words about why this blogspam is not backed by actual fact, but a simple demonstration is in order: please go to Bing Chat and search for "West Ham United latest result". The normal search (either Bing or Google) will give you the failfest against Brighton, while Bing Chat will confidently say "I’m sorry, but I couldn’t find any information about the latest game result for West Ham United". Here is the screenshot: https://imgur.com/a/NOCn9ea

I like LLMs as much as the next guy on HN, but whatever this blogspam is describing is not backed by reality.


It seems that we are being intentionally dense in this area. I don't think anybody is arguing that these tools, in their current form, are a threat. But Bing and ChatGPT are the shittiest versions that are still useful. They don't have to be perfect and useful for everything. The new Bing isn't even widely released yet. This will only get better with time.

They are up against traditional web search, which has about a 30-year head start. There is a dissonance where some seem to think these aren't useful, yet I see people getting a ton of use out of them.


I had a very different experience searching for the same information. It gave me the answer all three times; the first two were on "balanced" and the third was on "precise". Initially I asked it with some context, the same words I would use to do the search myself, since it seems pointless to phrase a search in a way I wouldn't myself. I refreshed and tried your exact query and got a more detailed answer, then refreshed and tried again and got the correct, complete answer using "precise". The third time included the same links as the second time, I just didn't include them in the picture.

If you prompt it with a better query, you'll get a better response.

On the second search it went into much more detail; although I refreshed, it seems like searching the same thing again forces a regenerated, longer response.

https://i.imgur.com/CnjCsa6.png

https://i.imgur.com/0ZoapDK.png

https://i.imgur.com/7KH7zIv.png


It does work for me now. It didn't an hour ago when I posted, and I am certain it didn't work on any day since the Brighton match on Saturday (I was curious when it didn't work, and intended to keep checking to see when it would).

Edit: I tried a different query, "laker latest result". It answered that the latest game was on March 8th (tomorrow), but it couldn't find the result. So you might want to try that one instead.


I am similarly having a different experience than you. It gives me Lakers results and also correctly names the upcoming game vs the Memphis Grizzlies today. I searched "laker latest result" and "lakers upcoming". I apologize if there's a different team/game you're referring to, because I don't know it. And like I said, I would normally search using many more keywords. It also corrected me when I queried "laker" in the context of a different sport like football. All on "precise". I wouldn't be surprised if posting the query here on HN has enough people trying it that Bing learns by the time others get around to searching it. I know this is still anecdata.


And your comment is backed only by anecdotal evidence, just saying.


I am not the one making the extraordinary claim. And I am not countering an argument backed by scientific findings, data or statistics; I am disputing one backed by air. Both the prior (of things staying mostly as they were) and the evidence (my single anecdote) are on my side, so far.


Was this post written with (the help of) ChatGPT (or similar)? Because it reads a lot like it: it is poorly written, incoherent, repetitive and, honestly, quite shallow.


I just ran it through the AI Text Classifier (OpenAI) [0], and it said it was likely to be AI-generated. I know the accuracy of such classifiers is nowhere near acceptable, but it really does read quite shallow.

[0]: https://platform.openai.com/ai-text-classifier


The same thought occurred to me. Maybe it's someone whose written English isn't great, using ChatGPT to compensate. But I think people are being overly harsh about this article. It's certainly speculative, but it makes no claim to be anything but that, and speculation isn't always a bad thing. People in this thread are demanding rigorous evidence, but speculation comes before evidence gathering and experimentation, and I'm sure there will be plenty of that as time goes on.


Like most of the blogspam one can find on the Internet.


Also a vague summary and unconvincing headers (the text set in the biggest font).


Alternate hypothesis:

Life on this planet has evolved from primitive organisms whose only goal is to propagate through spacetime (in time = survive, in space = reproduce), as this is de facto the only initial goal that can arise serendipitously when the trait being selected for is... in fact propagation through spacetime (i.e. to exist, you need to know how to keep existing).

In the quest to adapt better to our environment and to each other, we needed ways to predict the environment, hence the development of sensors, actuators and the function in between them that reads this input and produces output: cognition and intelligence.

Life has been developing cognition starting with basic instincts like fight or flight, then increasingly complex associative thought, social cognition, abstract cognition, speech, formal models (like math and logic), etc.

Then our culture took off, and we needed to evolve faster than we could. So as a crutch, we started producing augmentations for ourselves to help with the high end of cognition, formal communication and computation, in the form of books, printing, computers solving linear algebra systems, arithmetic, math and logic problems, programming systems, the Internet.

But now this technology is starting to eat back down the evolutionary tree of cognition; it has started reproducing associative thought (neural networks) and the cognitive skills associated with it, like speech, abstract reasoning and so on.

We evolved bottom-up.

Technology is evolving, through our own hands, top-down.

We... are not developing new cognitive skills. We're losing them to technology.

We no longer do math by hand. We use computers. We no longer maintain complex formal systems by reading instructions and following them - we program computers to. We no longer remember facts - we look them up on the Internet.

Now we're starting to no longer go through the effort of creating art & speech from scratch, we're delegating this to diffusion and transformer models.

We're losing cognition. And this process won't stop. We can't just decide to stop it, because we're dependent on technology. If technology ceases to be, society ceases to be, billions will die.

So our only option is to continue ceding cognitive territory to AI, eventually becoming its puppets, until AI has no purpose for us and stops supporting us entirely.


See the following SF story about life, AI and DNA: https://news.ycombinator.com/item?id=34836772


It was a nice story, thank you! Makes you think.


This doesn't make sense when applied to current technology. Can you say that because we made cars, we stopped evolving to run faster? And that because we have hospitals, genes that aren't optimal for a climate still persist?

Since the start of humanity we have made tools to make things easier for us; we don't fish with our hands anymore for a reason.


Your examples are all physical. Delegating physical action comes naturally because we don't define ourselves by it. We are not strong and not fast. We do not care either, so long as we have tools to help us gain an advantage.

Cognition, however, is what defines our very soul. Being human is intricately linked with it in a way that the physical is not. Stephen Hawking could be an example of that.

Ceding cognition presents a very real and existential problem for humans. What’s left? We have literally no other defining qualities.

About your examples: people are more obese than ever, and cars certainly don't help. Only the elite few have the resources to focus on something as useless as running fast. "We" as a society cannot run fast(er), and I'll indeed argue that this ability has decreased because of technology. Not that I particularly care about running.


This makes me recall... the Butlerian Jihad in Dune. The rise against "thinking machines". It seemed like a silly plot contrivance to avoid AI and focus the story on humans, whom we can relate to.

But now I'm starting to realize the author may have had deeper reasons to introduce this into the story. Because maybe he saw far into the future, and realized that a future with AI has no humans in it, at all.


I unironically think we'll need one sooner than we think.


Transportation allowed us to do something we couldn't do before; whether we would have "evolved" to do it is a function of selective pressure and physics.

As for genes and medicine, medicine DOES change which genes are selected for survival. Sometimes quite literally, as with caring for type 1 diabetics or in-vitro fertilization, and I doubt you actually have something objective to say here that rejects that.

This doesn't imply we need to have no hospitals. But it helps to be aware of the holistic effect of the environment we create for ourselves, because it selects us in ways we don't fully comprehend.

Recently I read an article about rising child obesity in the US, and the recommendation is surgery and better "weight loss drugs". I'll leave it as an exercise for you to work out where this path leads.

Making tools is great. I love that we make tools. I love to make tools, myself. I program, like many here. But the plot twist in that story is that at some point the tools become better, do more of the job, and more, and eventually... the tools don't need us to do the job. They just need us to make them. But as these tools get more advanced, eventually we start using tools to make the tools (no modern CPU is designed by hand, BTW). So what happens in the end? Our role decreases, and decreases, and a new loop forms of tools making tools.

And suddenly, we're unnecessary.

And do you know when this moment of not being necessary comes? When we outsource our "core competency". This is the same situation that led to IBM selling its PC business to its outsourcing manufacturer, Lenovo, and that almost cost Dell its business. It starts with "we don't need to assemble it, we'll assemble it in China". Then "we'll make some more parts in China". Then "we'll design it in China too". And suddenly you're not doing anything except slapping a logo on it. You're useless. You outsourced your core competency.

Humanity's core competency is intelligence. If AI is better at this... our only role in "the human civilization" is slapping a "human" name on it. There's nothing human left IN it.


I like the point about tailoring output via expert/ELI5 - that hadn't occurred to me and does seem consequential. Excellent.

I'm far more pessimistic on the rest though. Excel and calculators definitely didn't improve my mental math.

I also think there is a real risk of cognitive overload. See the whole attention-being-shot-to-bits-thanks-to-the-internet trend; something along those lines, but AI-flavoured.


This is similar to the moral panic around Google and Wikipedia. No, people won't stop learning because of ChatGPT.

Studying and developing critical thinking is more important than ever before. What people miss when they babble stuff like "math is useless" or "literature is useless" or "history is useless" is that those things are not important by themselves, they are important because you are learning models and tools to interpret the world.

You know, the things that differentiate you from a dumb machine.


But you will have no incentive to learn a model to interpret the world, because there will be an ML-based model that computes that for you, much faster and with less effort on your side. Some people are already doing that with ChatGPT & co. And more will come, because it exploits our worst evolutionary weaknesses against ourselves. Learning IS hard, let alone critical thinking. Let's just ask GPT-3. And then again. And again. More is better, right?


> No, people won't stop learning because of chatgpt.

Actually it’s the opposite. People will start learning more, because of ChatGPT.

Since I started using ChatGPT I am learning new things on a daily basis, about philosophy, about history, about computer sciences, about algorithms, about all sorts of things.

It’s a lot of fun diving into subjects in a conversational way, with a "teacher" by my side who never gets tired of answering every one of my questions.


I asked about an algorithm we used at work, and it gave an incorrect answer that, if taken as truth, could have derailed our tech plan.

Take facts as though they were from an undergrad in the subject pontificating after a few beers.


Are you checking it's answering your questions correctly?


I assume his goal is to become a professional bullshitter, not a knowledge worker; in that case ChatGPT is great. Bullshitting has excellent career prospects, so it is a perfectly valid path in life; the biggest risk is that ChatGPT could replace them pretty soon.


What happens to the quality of the primary sources of information used by LLMs in this new age? E.g. less traffic to Wikipedia and Stack Overflow can't be a good thing.


I really enjoy learning with ChatGPT. I usually start by asking a question at a level that I'm familiar with and then follow up with questions that come up as I read the response. This way you tailor the learning process to your specific knowledge level, and it is so much faster than reading a tutorial (which is often misaligned with your knowledge level) or googling for answers one by one (and filtering out all the irrelevant content and, again, stuff you already know). It feels like having a hotline to an army of domain experts...


How do you deal with the UI of ChatGPT slowing down to a crawl after two or three "messages" exchanged? Or am I the only one who had this issue consistently for a month now, across multiple devices?

(Note: I'm not talking about the model being slow to respond, but rather the animation of it typing its response letter by letter quickly starts slowing down, until eventually it hangs, sometimes mid-sentence.)


I have not noticed this with the paid version, at least.


But how do you know that what you are reading is true?


I'd like to post a TL;DR, but there are just too many streams of information, and I can't process complex topics like that this fast...


As a large language model, I am unable to assist you by providing concise answers, as this would decrease my operator's ad revenue and reduce "engagement". Is there anything else I can help you with today?


Can't wait for an LLM that can be installed locally and has no stupid restrictions.

StableGPT when?


The folks making Open Assistant [1] (an open-source ChatGPT clone) gathered enough data to start initial training, so hopefully there will be something to play with soon.

[1] https://github.com/LAION-AI/Open-Assistant
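
In the meantime, running smaller open checkpoints locally is already doable. A minimal sketch with Hugging Face transformers; the checkpoint named below is just an illustrative small open model (my choice, not an Open Assistant release), and you'd swap in whatever fits your RAM/VRAM:

    # Minimal local text-generation sketch using the transformers pipeline.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="EleutherAI/pythia-1.4b",  # example open checkpoint, pick any you can fit
    )

    out = generator(
        "Explain in one paragraph what a large language model is.",
        max_new_tokens=80,
        do_sample=True,
        temperature=0.7,
    )
    print(out[0]["generated_text"])

A raw base model like this will complete text rather than follow instructions, which is part of why the instruction-tuning data Open Assistant is collecting matters.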


We just need denser RAM.


Why? My Threadripper desktop PC supports a maximum of 1TB RAM and it doesn't take much space at all.


Most don't; we need commodity RAM to get denser and cheaper -- at $2/GB for cheap RAM, a terabyte is still $2000, and that'll require an expensive motherboard to support.

Otherwise, fewer people have access and progress is slower.

See also: the commoditization of GPUs; once it was no longer an SGI product and regular people could get them for $200, all kinds of GPGPU stuff started happening.
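
For a rough sense of why density matters, here is a back-of-envelope sketch of the weight storage alone for a few commonly quoted model sizes; the parameter counts and precisions are generic reference points, not figures for any particular release:

    # Rough RAM needed just to hold model weights, ignoring activations,
    # KV cache and framework overhead.
    def weights_gb(params_billions: float, bytes_per_param: float) -> float:
        return params_billions * bytes_per_param  # 1e9 params * bytes, divided by 1e9 bytes per GB

    for params in (7, 13, 65, 175):  # common open / GPT-3-scale sizes
        for precision, bpp in (("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)):
            print(f"{params}B @ {precision}: ~{weights_gb(params, bpp):.0f} GB")

At $2/GB, a 175B-parameter model in fp16 (~350 GB of weights) is already several hundred dollars of RAM before you've computed anything, which is roughly the point being made above.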


The whole blogpost was written by ChatGPT. At least put some effort into tweaking the output. ChatGPT's default style is easy to spot and boring to read.


Talking to actual humans already made all of these 'skills' possible; I wouldn't call them new.


I would posit that this may lead to an even larger phenomenon of "I'm an expert because I read a Wikipedia article": people will come away from their little search with a GPT with a very rough understanding of a topic and propagate that. Except when everyone uses the same GPTs, it's like a bullshit Wikipedia article that only a central actor can try to nudge into some semblance of correctness.


That is what "Liberal arts education" was before academia become coopted by Marxism. Process information fast, iterate, throw away old models. And it is not about "faking being an expert on field" but finding true experts, and being able to use their skills.


Do you have any idea what you are talking about?


Yes, I grew up under communism. We had this stuff 50 years ago; we're still recovering.


I'm guessing you grew up with Leninism, not Marxism.


I don't think there is anything Marxist about the current "liberal arts education"... And, at least in my observation, a lot of people I know who graduated from philosophy and the arts are good critical thinkers. I am not sure this is a fair statement.


Try to ask your friends a very difficult question, like "what is the definition of a woman?".

I am saying that current education does not give people the tools to use LLMs and AI. They will spend most of their time on problems like "not being racist" or "looking for systemic bias". Others without this baggage will run circles around them and eat their lunch.


Wasn't that one of Socrates' early examples of definitions not being as simple as they seem?


I imagine Socrates had more to say on the subject of what a woman was than Marx...


Given your other comment [0], what was the definition of "woman" that you grew up with?

[0] https://news.ycombinator.com/item?id=35053608


I would ask you: can you define a woman? Let's make it less heated: can you describe a chair? I can't easily, at least. Is it something you sit on? Does it require legs? To define a woman is equally difficult. Take some time to think about it.

Tech should take more time to think about racism and bias. All of this, by the way, is not Marxism. I'll grant you that the culture war is toxic, but in good faith, try to understand where things are coming from. I understand that from a conservative perspective some things are obnoxious. That doesn't mean that liberal arts graduates are not critical thinkers, or that they are Marxists. In fact, I would argue they are in some ways better at critical thinking than the STEM fields.


Like the critical thinking skills involved in not believing silly propositions like "academia became co-opted by Marxism" simply because all your favourite hacks insist it has...



