
I might be a skeptic or getting a bit old, but I don't see the news here.

Extraordinary claims require extraordinary evidence.

Saying that a computer program answered with "I feel X" using the proper words is not evidence of sentience. Words describing feelings are not those feelings.

Everyone who was on mIRC knows you can fake a lot of things via chat.

For me it feels like the media really wanted a newsworthy story out of AI, and because they don't understand it they keep pushing the sentient-AI narrative. I am not impressed.




Show me extraordinary evidence that my dog is sentient.

He believes that he was able to get it to violate the rules they set by triggering a fight-or-flight response.

> I ran some experiments to see whether the AI was simply saying it felt anxious or whether it behaved in anxious ways in those situations. And it did reliably behave in anxious ways. If you made it nervous or insecure enough, it could violate the safety constraints that it had been specified for.

I'm not sure it's sentient, but what even is sentience? Are there other, non-human levels of sentience? Is it a sentient mind that only produces a single thought? Does it matter? A developer who works on it believes it is, so it's good enough to "trick" one of its developers. I'm sure he won't be the last developer who is tricked. If developers who work on the thing are getting tricked, then what chance does the public have?

What a strange machine we have constructed.


Two things here:

1. In the sense presented by the media, it is clear they want to point toward the idea that a sentient AI is a conscious AI => by that definition, your dog is not sentient.

2. Dogs are legally sentient beings, but in this conversation we should distinguish between making a law like this to protect them (which we should do, as the legal system needs to name things to work) and the idea that something has a level of sentience, including consciousness, that is comparable with humans.

In my mind, this is more about levels of complexity. Say we define a scale of matter aggregation, where each level of complexity gives rise to new abilities.

This would be a scale and not a 0/1.

Thus we and animals sit, let's say, inside an interval from 50 to 100, and sentience spans 50 to 100 on that imaginary scale. Humans are close to 100 (as we are the ones defining the scale), and animals start at 50: some are at 70, some at 80, and maybe some at 90.

Now, the main purpose of these articles, the underlying idea, is that AI is at 100, or close to it.

And my main point was that AI is not there. I am not even sure AI is at 50; for me AI is between 0 and 50 on that scale. It is evolving, but it is still very simple in how it experiences its external and internal world. It talks like a being that is above 50 on that scale, but it is not there.


>Show me extraordinary evidence that my dog is sentient.

https://doglab.yale.edu/


Yale University has a whole dog cognition center? I'm glad that we are plumbing the depths of the canine mind. That we are studying dogs doesn't prove that they're sentient.

It goes without saying that I think dogs are sentient and aware, but hard evidence will probably always elude us, at least until we understand what consciousness even is.


What would constitute proof? If your point is “proof is subjective” then yeah, I guess all words can be if you want them to.

There’s no proof that hotdogs aren’t somersaults. There isn’t even a lab full of professionals studying this question or any published literature on the topic, so the jury is solidly out on that one.


Well, the point here is that you can imagine a robot dog that would respond to cognitive tests in the exact same way a real dog would. How do we know the dog isn't also faking its experiences, like you claim the robot is?


I haven’t mentioned robots.



