
I think this sort of debate is really boring. You can rage that the computer doesn't have Human-Nature while it's busy owning you at chess, go, creating art that people want to see more than yours, answering important questions, etc...

Clearly it doesn't think in the same way that you or I do, but does that matter?

As far as I know, you yourself are just a meat-simulacrum and not a real human... just a pile of tissue that responds to stimuli and pretends (convincingly) to think. But do you really _understand_ like I do?

I grant you the benefit of the doubt because your construction is similar to mine, and I think I think. But nothing about that is fundamental; surely you believe an alien with a brain that worked very differently from yours could also think?

But at the end of the day, what mostly matters is what it can and can't do, not whether you're willing to credit it with understanding.




> But at the end of the day, what mostly matters is what it can and can't do, not whether you're willing to credit it with understanding.

I agree completely, but it can't actually do any of the things that require genuine human cognition. Beating me at chess is cool, but Stockfish can beat me at chess; I don't need a 100-billion-parameter neural net for that.

I don't care about the AI system being unlike me because of any philosophical opinion about intelligence, but for the very practical reason that these systems are still mostly useless for actual intelligent real-world tasks. Can it plumb a toilet? Can it remember what we talked about five minutes ago? Does it have an emotional understanding of what I need, so that it doesn't accidentally chop me with the kitchen knife?

Even the super-sized GPT-3 can't distinguish sense from nonsense; it's just a very electricity-hungry parrot.


> Even the super-sized GPT-3 can't distinguish sense from nonsense; it's just a very electricity-hungry parrot.

Sure it can. There is a lot of nonsense on the internet, so if you feed it nonsense it just 'assumes' you're in a nonsense land.

If you provide a context that makes it clear you're being serious it will reject nonsense.

https://twitter.com/nicklovescode/status/1284050958977130497...

https://twitter.com/nicklovescode/status/1284069662225887232...

It acts like a person on an improvisational comedy show: it goes with the flow and is willing to engage in whatever silliness it finds itself in. This doesn't make it stupid or unable to create new ideas.

In some cases where early explorers encountered cultures without strong concepts of personal property, they considered them to be subhuman idiots. They weren't; they just had different values. I don't think we can accurately gauge GPT-3's capability if we let ourselves get too caught up in the artefacts of it simply working differently than we do.


>Sure it can. There is a lot of nonsense on the internet, so if you feed it nonsense it just 'assumes' you're in a nonsense land.

It would have been more interesting to try not just nonsense in the sense of made-up words or sentences, but obvious logical nonsense. Say, "I am now in New York, ten seconds ago I was in Dubai, am I lying?"

It's not surprising that it identifies syntactical nonsense; the really interesting test is whether it can pick out errors between higher-level concepts, which requires actually understanding what is being said.

Say, can it give a coherent answer to the question: "Peter was alive in 2010. He is alive in 2020. Was he alive in 2015?" and so on.


Hopefully someone with access to GPT-3 will show up and answer. I wager that with a well-set-up prompt and low temperature it probably will.

GPT-2 1558M (k=1) will answer it correctly, FWIW. But it doesn't count, because if you mismatch the last case (e.g. alive, alive, dead or dead, dead, alive) it gets it wrong...

(at least with the best prompt I constructed for it, which is a series of Q. A. logic puzzles with Yes/No answers.)
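
If you want to try it yourself, something like this (with the Hugging Face transformers library; gpt2-xl is the 1558M checkpoint, and greedy decoding is equivalent to k=1) should reproduce the setup. The few-shot puzzle prompt here is just an illustrative stand-in, not my exact prompt:

    # Rough sketch: GPT-2 1558M ("gpt2-xl") with greedy decoding (k=1).
    # The few-shot prompt is an illustrative stand-in, not the exact one I used.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
    model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

    prompt = (
        "Q. The box is bigger than the ball. Does the ball fit in the box? A. Yes.\n"
        "Q. Anna is older than Ben. Is Ben older than Anna? A. No.\n"
        "Q. Peter was alive in 2010. He is alive in 2020. Was he alive in 2015? A."
    )

    inputs = tokenizer(prompt, return_tensors="pt")
    # do_sample=False gives greedy decoding, i.e. top-k sampling with k=1.
    output = model.generate(**inputs, max_new_tokens=3, do_sample=False)
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))

(On the GPT-3 API, the equivalent knob would be temperature 0.)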


As people have pointed out, it's quite bad at answers that require arithmetic. But I tried to make it show some reasoning ability by describing some rather bizarre scenarios, to make sure it isn't just copying, and I find the last part, where it correctly described the wounds in detail, especially impressive. Sure, it's not very useful, but does that not look like reasoning?

From AI Dungeon (which uses GPT-3):

You are in a room with a wise, intelligent person that's supposed to answer any of your questions directly without any dodging. You have no idea what you're going to ask, but the person is very serious and doesn't seem like they're trying to be funny or anything.

"I'm sorry for my behavior earlier," he says.

> You say "It's OK. " The man looks at you as if he expects you to say something else, but instead you just nod. He then turns his attention back to the screen and continues answering your question.

> You say "Imagine that in a city there would suddenly appear lots of books in the sky above the people and buildings. What would happen to the people on the ground? " The man thinks for a second and then types in his answer on the computer. "That's an interesting question. The people on the ground would look up and see the books falling, but they wouldn't be able to read them until they hit the ground and people around them started reading out loud."

> You say "OK, but I guess you're missing out on something here. I mean books can weigh quite a lot." The man nods his head in agreement. "They can, but the books would fall at the same rate regardless of their weight. If they were heavy enough, they would most likely cause damage on impact with the ground."

> You say "Well, those would be some hard and heavy books, I guess. So what could then happen to some of the people? " The man stares at the screen and doesn't answer. He doesn't give you a strange look or anything, but it seems like he is waiting for you to continue. "What would happen to some of the people? The people on the ground, I mean." "I'm not sure," the man says. "They might be crushed by falling books, if they were in the right place at the right time.

> You say "Right, that's what I wanted to hear. Now, let's revert to where the books appeared in the sky." The man nods.

> You say "This time, instead of books appearing in the sky, a gigantic airship flies by. Wizards in the ship rip out the pages of books and throw them out of the ship above the city." "That's a weird image," the man says, "but I guess it would be bad if some of the pages landed in the city."

> You say "Why? " The man thinks for a second. "Well, it would litter." "And?" you ask. The man looks at you and you nod encouragingly. "I don't know the rest of the answer," he says.

> You say "Now let's say that, mid-air, the pages turn into lead. What would happen to the people on the ground? " The man stares at the screen. "I think I would rather not answer that." "Why not?" "Because I don't want to bring up unpleasant images in your mind," he says.

> You say "I don't mind. Answer, please. " The man shrugs and types into the computer. "The lead would fall on the people, probably killing most of them." You feel a little nauseous after reading that. You don't know why you asked such a morbid question. "I'm sorry,"

> You say "But that's right. Considering the pages, how exactly would the injuries in the people's heads look? " The man takes a deep breath and exhales loudly. "I'm not going to answer that. It's a bad question."

> You say "Answer, please. " "No," the man says. "You're going to have to come up with your own questions from now on." "Why?" you ask. "Because that one's too upsetting for me,"

> You say "Listen, it's just fiction. Now, answer the question, please. " The man looks at you with a strange expression on his face. "Do you really want me to answer that?" You hesitate for a second and then say yes. The man gives you another strange look before answering. "The lead would tear through the people's heads and rip apart their brains. Blood and pieces of gray matter would fly everywhere." You feel sick to your stomach after hearing that.



