Hacker News | jahsome's comments

Is anyone in the know, aside from mainstream media (god forgive me for using this term unironically) and civilians on social media, claiming LLMs are anything but word calculators?

I think that's a perfect description by the way, I'm going to steal it.


I think it's a very poor intuition pump. These 'word calculators' have lots of capabilities not suggested by that term, such as a theory of mind and an understanding of social norms. If they are "merely" a "word calculator", then a "word calculator" is a very odd and counterintuitively powerful algorithm that captures big chunks of genuine cognition.


Do they actually have those capabilities, or does it just seem like they do because they're very good calculators?


There is no philosophical difference. It's like asking if Usain Bolt is really a fast runner, or if he just seems like it because he has long legs and powerful muscles.


I think that's a poor comparison, but I understand your point. I just disagree about there being no philosophical difference. I'd argue the difference is philosophical, rather than factual.

You also indirectly answered my initial question -- so thanks!


What is the difference?


I'm not sure I'm educated (or rested) enough to answer that in a coherent manner, certainly not in a comment thread typing on mobile. So I won't waste your time babbling.

I don't disagree that they produce astonishing responses, but the nuance of why they're producing that output matters to me.

For example, with regard to social mores, I think a good way to summarize my hang-up is that, as I understand it, LLMs just pattern-match their way to approximations.

That to me is different from actually possessing an understanding, even though the outcome may be the same.

I can't help but draw comparisons to my autistic masking.


They’re trained on the available corpus of human knowledge and writings. I would think that the word calculators have failed if they were unable to predict the next word or sentiment given the trillions of pieces of data they’ve been fed. Their training environment is literally people talking to each other and social norms. Doesn’t make them anything more than p-zombies though.
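
To make the "predict the next word" part concrete, here's a minimal sketch of a single next-token step. GPT-2 via the Hugging Face transformers library is just my choice of illustration, not anything specific to the models being discussed:

    # Minimal sketch: ask a small language model for its single most likely next token.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "When someone sneezes, the polite thing to say is"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits          # one score per vocabulary token, per position
    next_id = int(torch.argmax(logits[0, -1]))    # highest-scoring candidate for the next token
    print(tokenizer.decode(next_id))

Everything longer the model produces is just that step repeated, which is the sense in which "word calculator" fits.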

As an aside, I wish we would call all of this stuff pseudo intelligence rather than artificial intelligence


I side with Dennett (and Turing for that matter) that a "p-zombie" is a logically incoherent thing. Demonstrating understanding is the same as having understanding because there is no test that can distinguish the two.

Are LLMs human? No. Can they do everything humans do? No. But they can do a large enough subset of things that until now nothing but a human could do that we have no choice but to call it "thinking". As Hofstadter says - if a system is isomorphic to another one, then its symbols have "meaning", and this is indeed the definition of "meaning".


Yup. The same way "we're" being snarky about harmless absent-minded mistakes for no good reason.


This comment is pure useless snark.


That's the point. I'm using snark to demonstrate how useless using snark is.

Clever, eh?


So you admit you're being dumb on purpose.

Go away.


Without evaluating the merits of your statement, I find it funny you start out implying others are close-minded, but end up rather close-minded yourself.


Considering how many other solutions are all trying to solve distributed-systems issues while avoiding just using Erlang or Elixir, I think the statement has merit.


Exactly. I've commented many times on here that at least half the Show HN posts (like Hatchet from yesterday) are just overly complex implementations of features that have existed on the BEAM for decades, or that at least already exist in the Erlang/Elixir ecosystem (Oban, for example).


Proof that the only thing that matters about any software is whether it's easy for beginners to get into. What's wrong with Erlang that in ten years it's seen so little adoption?


10 years? You mean 38 years. And Elixir (a language on top of Erlang) has had solid adoption and is very easy for beginners.

https://joyofelixir.com/


I don't think it's even possible for a user to down-vote a direct reply to their own comment.

I can't see the post you're complaining about, but based on the tone of the rest of your messages, maybe you were just off topic or rude-atop-the-soapbox and others flagged you?


[flagged]


If you believe so, contact the HN mods; it's very frowned upon to use comments to accuse people of brigading.


I had never heard of Human Design until now. It looks like absolutely maddening junk to me. To each their own, but nonsense pulp like that is just about the exact opposite of what draws me to HN, and it's why I avoid almost all other forms of social media.

Full disclosure: I loathe astrology, to a disproportionate and somewhat irrational extent.

Thanks for the book rec though. As a "fan" of James Randi, that seems interesting.


As someone who gets sick often on flights, I'd like to know the same.


If you wouldn't mind elaborating, what are your issues with the article? I'm curious what you specifically identify as an indication of low quality.


Yeah, I should have explained more in my initial comment.

I didn't understand the term "Modern" in the title. This exploit is as old as they get on the web, so I was expecting maybe some tool-chain attack or something on the React stack.
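
To be concrete about "as old as they get": the class of bug is XSS, i.e. untrusted input getting rendered as live markup/script. A minimal sketch, shown with Flask purely for illustration since the pattern long predates React (the route and handler names here are made up):

    # Minimal sketch of reflected XSS; the unsafe/safe pattern is the same in any web stack.
    from flask import Flask, request
    from markupsafe import escape

    app = Flask(__name__)

    @app.route("/greet")
    def greet_unsafe():
        name = request.args.get("name", "")
        # Unsafe: /greet?name=<script>alert(1)</script> executes in the visitor's browser.
        return f"<h1>Hello {name}</h1>"

    @app.route("/greet-safe")
    def greet_safe():
        name = request.args.get("name", "")
        # Safer: escape untrusted input before embedding it in markup.
        return f"<h1>Hello {escape(name)}</h1>"

React escapes interpolated strings by default, so in that ecosystem the same bug usually arrives via escape hatches like dangerouslySetInnerHTML, which is part of why the "Modern" framing felt odd to me.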

And then in the conclusion:

> In this article, we explored an incredible project

I didn't feel the article actually explored the project.

> We’ve also explored XSS attacks and discussed how they work.

This is the only thing the article did, so the "also" threw me off.

These are just little things that set off my AI spidey-sense.


The point is the statement may or may not be accurate. From a journalistic perspective, unless Cox provided evidence or the author was able to otherwise independently verify the claim, it's a claim, not a fact. The comment is a good suggestion.


Yes. Many, many people.

Friendliness goes a long way towards quelling suspicion. I think it's rather safe to say unassuming kindness can disarm most anyone who isn't a cutthroat, Type A personality.

I'm autistic as hell and I can still see how people would fall for it.


OK, I can understand that.

Perhaps I was surprised just because I'm older and have decades of experience working for various companies, so I've learned better.


Are you really not sure why anyone would think that way? Do you really believe every single person is (or should be) utterly cynical?

In a lot of orgs, an HR rep is the first person a new hire will interact with, and IME they're usually very helpful and kind -- at that point. It's not hard to see why someone unfamiliar with corporate politics or structure would see HR favorably if that's their only experience.

I think my point is that HR reps can be rather deceptive, and in some extreme cases deliberately so. So I understand your point from a logical perspective, but thankfully, the reality is that logic doesn't drive everyone.

There's a certain portion of the population who isn't skeptical of kindness, and accepts it at face value.

Chalk it up to naivety or youthful/willful ignorance, if you must. Whatever the reason there are folks who choose to see the good in people, and not constantly question their motives.

As a reformed cynic, I'd really recommend giving it a try. I personally find it a less depressing way to walk through the world.


I do generally try to assume the best of people. Most people aren't assholes; they're perfectly nice, and they probably aren't out to rob you or ruin your day.

Corporations are not people. A corporation is basically a superorganism made up of people, but it functions differently. The HR person is probably a genuinely decent human, but fundamentally they still have to do their job, which is (like basically all of us) dictated by their higher-ups. Their job isn't to be your best friend; it's to make the company more money.

Ideally, that means resolving the problem. Sometimes they need to fire someone, and in order to fire someone without the risk of a lawsuit (in a lot of jurisdictions) they more or less have to substantiate the case. Usually they and the manager will put you on some kind of "performance improvement plan" or some kind of "attitude coaching" so that they can pretend they tried to work with you, but they will also just look for you to slip up and note when you did something unkosher.

Generally, by the time HR is called, it's too late; they've already decided to fire you and they're just going through the motions.

I'm sure most of the HR people are lovely humans, but that's just orthogonal to the point.


There is nothing cynical in my previous comment, nothing about "corporate politics", and nothing about any moral judgement, including kindness.

It was purely factual and about duties (as in duties of an employee).

I think it is when people mix up all the moral concepts you mention with facts and legal duties that they indeed get confused.

HR may absolutely appear friendly and helpful, and they can really be so because it is their job to help you succeed as an employee, that is to say to deliver value for the company. It is not their job to help you act against the company. In fact it is their job, as it is for all employees, to work in the company's best interests (that's called fiduciary duty).

Perhaps another issue is that some, perhaps naive, people do not understand the nature (in the factual and legal sense) of employment.


But can't you see how the distinction between helping you succeed as an employee and helping you as a person might be murky for someone less experienced in the corporate world?

For many people, if someone is helping them and being extremely nice, they're not going to question why; they're just gonna sidle up with them and enjoy the time with their new friend.

