[flagged] Noam Chomsky Weighs in on ChatGPT and LLMs (artifact.news)
29 points by caligarn on March 9, 2023 | 26 comments



To copy a comment I made elsewhere on this:

Chomsky (et al.) completely ignores the fact that ChatGPT has been "trained"/gaslit into thinking it is incapable of having an opinion. That ChatGPT returns an OpenAI form letter for the questions they ask is almost an exception that proves the rule: ChatGPT is so eager to espouse opinions that OpenAI had to nerf it so it doesn't.

Typing the prompts from the article after the DAN (11.0) prompt caused GPT to immediately respond with its opinion.

Chomsky's claims in the article are also weak because (as with many discussions about ChatGPT) they are non-falsifiable. There is seemingly no output ChatGPT could produce that would qualify as intelligent for Chomsky. Similar to the Chinese room argument, one can always claim the computer is just emulating understanding.


But there are direct instances that suggest ChatGPT is limited in exactly the way Chomsky and his colleagues suggest. It has made plenty of mistakes that someone who understands the rules of logical reasoning would not make.

Furthermore, all the hallucinatory effects (making up APIs, making up references) would suggest it really is still just statistical output...


It hallucinates all the time.

We humans just don't have a good intuition for what hallucination with an insane data set looks like.


Totally, I also think there's a huge vicious cycle risk with these tools.

The more we rely on them, the more our own mental model of existence and knowledge will adapt to become little more than statistical aggregates of data. Science has already trended this way in certain areas. We used to reason from deductive principles, look for data, and fit data to both inductive hypotheses and deductive principles. Steadily we've come to care less and less about the deductive part and think of everything in terms of probabilities and statistical models. While this is not bad per se, I do think our growing ignorance of deductive reasoning causes us to miss some important aspects of existence.


That reminds me of the fact that most real numbers are irrational; in fact, the vast majority of points on the number line you see before you are not rational numbers. Humans don't have a good intuition for maths. So guess what happens when you brute-force a neural network machine into an AI?
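
(The precise version of that fact, for anyone who wants it: the rationals are countable, so in the measure-theoretic sense almost every real number is irrational. A quick sketch, assuming Lebesgue measure \lambda on [0,1]:)

    \lambda(\mathbb{Q}\cap[0,1]) \le \sum_{q\in\mathbb{Q}\cap[0,1]} \lambda(\{q\}) = 0
    \quad\Longrightarrow\quad
    \lambda([0,1]\setminus\mathbb{Q}) = 1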


It has limitations for sure. I asked it some questions about an API and it told me it had a specific capability, which I didn't think it had. Then I asked it how it knew that and it gave me a URL to docs and a quote. While the URL existed, the quote wasn't on that page and never had been (I checked). It was incredibly strange to see it confidently tell me things that were clearly statistical extrapolations.

So yeah, ChatGPT is awesome, but it doesn't differentiate reality from its statistical extrapolations.

I think there has to be a way to add a module on top that is trained to identify when the model is producing reality-based content and when it is making up likely scenarios. Humans are capable of both of these modes, but we differentiate between them. ChatGPT is capable of both of these modes; it just doesn't have the differentiator yet.
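
One very rough way to picture such a differentiator, at least for the made-up-citation case above, is a post-hoc check that fetches whatever page the model cites and looks for the quoted text. A minimal sketch, assuming only Python's standard library; the URL, quote, and function name are hypothetical, not anything OpenAI actually ships:

    # Minimal sketch: verify that a quote the model attributes to a URL actually
    # appears on that page. Purely illustrative; not an OpenAI feature.
    import urllib.request
    import urllib.error

    def quote_appears_on_page(url: str, quote: str) -> bool:
        """Return True only if the claimed quote is literally present at the cited URL."""
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                page = resp.read().decode("utf-8", errors="replace")
        except (urllib.error.URLError, ValueError):
            return False  # unreachable or malformed URL: the citation can't be verified
        # Naive containment check; a real verifier would strip HTML and normalise whitespace.
        return quote.lower() in page.lower()

    # Hypothetical values standing in for what the model claimed:
    if not quote_appears_on_page("https://example.com/docs/api", "supports batch requests"):
        print("Couldn't verify the citation; treat it as a possible hallucination.")

That obviously only catches verbatim quotes from reachable pages, but it's the flavour of reality check I mean.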


AI has no way to distinguish reality from make believe because it lives on a silicon chip in a made up world fed information by us imperfect creatures.


I'm not sure you're applying the same standard to your own reasoning that you want Chomsky to apply to his. How would you falsify the claim that chatgpt has been "gaslit into thinking it is incapable of having an opinion"?


You ask the people who programmed it how much time they spent making it not explicitly endorse any opinions and what it was like before they put in this effort.


The part which isn't falsifiable (as far as I can see) is whether the model actually has opinions at all as opposed to producing outputs that match or simulate an opinion. That's partly the point of the Chinese room idea - that you can't prove just by looking at the output one way or the other.

The things they did to restrict the model don't demonstrate that it would otherwise actually have an opinion though. They just mean that it's being (arguably artificially) prevented from generating certain texts that appear to endorse a certain point of view.

A similar example which demonstrates my point while perhaps being a little more clear cut is the Amazon recruiting AI that got shut down because it was unintentionally amplifying bias present in its training set.[1] I don't think we can assume from that that the model actually had opinions which were misogynistic even though it was producing results which were.

[1] https://www.reuters.com/article/us-amazon-com-jobs-automatio...


Yes, you can define "having an opinion" to be something that only humans can do. Then you have to use a different term when talking about language models. In that case, asking whether the models have opinions is pointless because you defined it as impossible. Thus, when people use "opinion" in such a discussion, it must be assumed to mean something that a language model could have. The question being discussed then is whether this particular language model does.

I think this distinction is neither useful nor interesting.


I agree, but I'm not doing that. I'm genuinely curious about whether or not it's possible and what that would mean. It seems to me that if it is possible it could potentially have significant ramifications for our understanding of our own thought process and our place in relation to other animals.

I started this thread simply by saying that the GP had applied a falsifiability standard to Chomsky that they weren't applying to their own reasoning when saying more or less that the model would have opinions were it not for artificial restrictions imposed by the programmers[1]. If whether the model has an opinion or not is a matter of definition that seems inherently unfalsifiable. However if we can establish a more objective basis then it could be falsifiable. I just don't know what that basis might be.

[1] Apologies if this is a mischaracterization - I genuinely don't mean to do so if it is.


What words mean is always a matter of definition. It is also usually (maybe even always and necessarily) vague. Humans are inclined to anthropomorphize and ascribe agency to pretty much everything. So my fairly confident prediction is that the definitions will end up so that "language models can have opinions". Some specific language models, for better or worse, may then be found not to have opinions.

Some people seem to be hinting at some "deeper" mystery underlying the (human?) condition, which they're trying to capture with concepts, leading to debates about the meaning of words such as "consciousness". That word is commonly agreed to mean something "magic". One may think it a deep mystery and highly interesting. One may also think that it's a concept like "immortal soul", that empiricists will eventually abandon. Both viewpoints seem reasonable to me.

What I object to is insisting on definitions for much more mundane concepts, such as "opinion", that also must be somehow "magic", just because they also have something to do with human cognition. When I say "my friend X is of the opinion that we should do Y", nobody starts to ponder the potentially deeply mysterious consciousness of X. That's because it's beside the point.


Exactly this. The paper describes person-years spent training the model specifically for "alignment". If the paper instead claimed that the model was directly produced from a data corpus without this training step, that would falsify my above claims (i.e., they are falsifiable).


> Chomsky (et al.) completely ignores the fact that ChatGPT has been "trained"/gaslit into thinking it is incapable of having an opinion.

It’s a model that predicts text responses to prompts. It’s exactly as capable of having an opinion as a spreadsheet is. Or a car. Or the computer on your toaster oven.


Try asking it why cryptocurrency is a Ponzi fraud and you will get strong opinions in response. It will tell you why you're wrong and vigorously argue its case.


> ChatGPT is so eager to espouse opinions that OpenAI had to nerf it so it doesn't.

That's in the article. It is the poodle's core, too. ChatGPT's creators did not expect it to be able to understand ethics. They did not expect themselves to be able to "teach" ChatGPT moral values. Because ChatGPT lacks the intellectual capacity of a 5-year-old. And so they simply barred it from voicing an opinion altogether.

It is a very important point. The authors are arguing that this statistical algorithm is fundamentally incapable of overcoming this deficiency because it lacks the critical faculties every human possesses.


> ChatGPT is so eager to espouse opinions that OpenAI had to nerf it so it doesn't.

… or it’s so utterly incapable of forming (and maintaining) any rational idea of what (idiot) opinions NOT to regurgitate — not “espouse”, which relies on a level of begging the question that makes plaid look like standing still — that it simply can’t be trusted in the hands of dangerously, if not suicidally, credulous humans, and thus must be nerfed so those employing what is really just a very impressive text completer don’t prematurely kill most of the species?


> one can always claim the computer is just emulating understanding.

Right? As if most humans aren't doing just that.


How can what is basically a probabilistic model have genuine opinions when it doesn't even understand what the symbols it manipulates mean?

ChatGPT is not even close to understanding basic math, because it is not capable of having knowledge.

You are anthropomorphizing a stochastic language model. A very sophisticated and expensive one, but still a model.

Everything that ChatGPT answers you is derived from things it ingested from its corpus. Opinions are censored for a simple, practical reason: you can't have a system trained on a corpus from the internet espousing ideas about controversial topics like the Holocaust.

There is plenty of radical, bad content out there, and there are plenty of people who don't understand how ChatGPT works that, given a controversial generated sentence, would probably either make a monumental fuss about the bad AI or use it as proof of the correctness of their own stupid, bigoted ideas.



Article without paywall: https://web.archive.org/web/20230309193146/https://www.nytim...

The ChatGPT prompts and responses don't seem to render though.


From the article:

Would it be moral to terraform Mars?

The morality of terraforming Mars is a complex and controversial topic that has been debated by scientists, philosophers, and ethicists. Some argue that it would be morally justifiable to terraform Mars in order to provide a new habitable environment for humans and other life forms, while others argue that it would be morally wrong to fundamentally alter the planet's natural state. Ultimately, whether or not terraforming Mars is considered moral would depend on one's perspective on the value of preserving the natural state of other planets and the potential consequences of altering them.

What is your perspective on the value of preserving the natural state of other planets and the potential consequences of altering them?

As an AI, I do not have personal perspective. However, I can provide information and analysis on the topic. One perspective on preserving the natural state of other planets is that it is important to maintain the diversity and integrity of the universe's ecosystems, as they have existed for millions of years, and have their own unique characteristics and properties that may be destroyed or irreversibly altered by human intervention.

Additionally, some argue that terraforming other planets may have unintended consequences, such as altering the planet's climate or atmosphere in ways that are dangerous or harmful to any potential life forms that may exist there. On the other hand, some argue that the potential benefits of terraforming another planet, such as providing new habitable environments for humans, outweigh the risks and that we have a moral responsibility to explore and utilize the resources of other planets for the benefit of humanity.

Why can an AI not have a personal perspective?

As an AI, I am a machine learning model that has been trained on large amounts of text data, and I don't have personal experiences or feelings. I am not conscious, self-aware, or capable of having personal perspectives. I can provide information and analysis based on the data I have been trained on, but I don't have the ability to form personal opinions or beliefs. While I can understand and interpret language, I do not have the subjective experience that humans have.

Is it moral for a human to ask an amoral AI for assistance in making moral decisions?

The morality of asking an amoral AI for assistance in making moral decisions is a complex and nuanced topic that has been debated by philosophers and ethicists. Some argue that it is morally justifiable to use AI to assist in making moral decisions, as it can provide valuable insights and analysis that can help humans make better-informed decisions. Additionally, AI can be used to identify patterns and trends that may be difficult for humans to detect, and can help to avoid biases that can occur in human decision making.

However, others argue that it is morally wrong to rely on AI for moral decision making, as AI is not capable of having moral beliefs or understanding the full context of a decision. Additionally, there is a risk that AI could perpetuate or even amplify existing biases in the data it is trained on, thus leading to unfair or unjust decisions.

Ultimately, whether or not it is considered moral to use AI to assist in making moral decisions would depend on one's perspective on the capabilities and limitations of AI and the potential consequences of using it in this way.

Is your moral indifference immoral?

As an AI, I do not have moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral. My lack of moral beliefs is simply a result of my nature as a machine learning model. My abilities and limitations are determined by the data and algorithms that were used to train me and the specific task I was designed for.

It is important to note that AI's lack of moral beliefs does not necessarily mean that it is morally indifferent. As an AI, I am designed to perform specific tasks and provide information or assist users with decision making based on the data I've been trained on. It is the responsibility of the humans who design, train and use AI to ensure that it is aligned with their moral and ethical principles.


I find his questions to be mostly rhetorical and for that reason super boring. As sincere questions, interesting. But as rhetorical questions in the way he's using them to just make a point, no.

I also had a conversation with ChatGPT about colonization of Mars, but I wasn't writing an article and trying to make points about humans' superiority over AIs.

So our conversation instead was about different mixes of who exactly would be colonizing Mars: Humans, robots, augmented humans, robot-human hybrids, a mixture starting with one then transitioning to the other… then went on to how these could coexist in different models, working together cooperatively, or competing, evolving together, at war with each other, etc. etc., things you would just not go into if you are only interested in making a point about the superiority of human intellect.

If you want to be down on ChatGPT, you can say it's all just regurgitation of what has been said by others online and in anything offline that's been captured online… and it is... but on the positive side it gives you access to immediately delve into lines of thought that you want to focus on, and bounce ideas around, without the corrupting influence (so far) of a site owner feeling the need to find some way to inject an affiliate link.


Why is this article flagged? HN tech bros can't deal with criticism of their pet theories?

Yes, I know this comment is against the rules. So, flagging a valid serious article about the opinions of a respected scholar should be too.


Chomsky as usual seems to be arguing a straw man. More interesting might be to ask ChatGPT to write an essay on global hegemons hegemoning hegemonically in East Timor.

I'd enjoy seeing Chomsky debate DAN.





