
Worth noting that in the limit the distribution P(human did/said A | context B) is a complete definition of human behavior. If you could model this perfectly, that would be a perfect model of a human, i.e., identical.
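
Concretely (toy contexts and probabilities, obviously; a real model would be a giant neural net, not a lookup table), the claim is that a perfect, exhaustive version of something like this would be behaviorally indistinguishable from the person it models:

    import random

    # Toy stand-in for P(A | B): observed action frequencies per context.
    # The claim above is that an exact version of this table, covering every
    # possible context, would reproduce human behavior exactly.
    behavior_model = {
        "greeted by a stranger": {"say hello": 0.7, "nod": 0.25, "ignore": 0.05},
        "asked a hard question": {"think out loud": 0.5, "deflect": 0.3, "answer": 0.2},
    }

    def act(context):
        dist = behavior_model[context]
        actions, probs = zip(*dist.items())
        return random.choices(actions, weights=probs, k=1)[0]

    print(act("greeted by a stranger"))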



Exactly.

And the real problem is the way humans decide sentience in the first place - by how much the machinery acts as if it were sentient. There is no other information - just our perception. If we imagine two agents, A and B, one sentient and one insentient, but both acting identically, we theoretically couldn't decide which one is which.

So whether or not there is sentience in a machine then becomes a question as unanswerable as whether there is something that exists outside the universe we can perceive. We cannot know what we cannot possibly perceive.


Actually more like P(fraction of humans sampled did this | context X). Which deprives the model of viewpoints, cultures, thought patterns, and means of verbal communication not ostensibly on the internet. Given the ills and changes in human behavior that came with the advent of social media, I think at best it would be a distorted model of humanity; and at worst, a morally shambolic one.


That's the brute-force approach anyway. Even if you do take an inordinate amount of data to sample from, you'll likely get something that's woefully impractical to operate, even if it produces vaguely human-like responses.

We do, on the other hand, know for a fact that it's possible to run an instance of consciousness in a volume of about a liter that consumes something like 20 watts (aka your average human brain), so there's probably something wrong with our general approach to the matter. GPT-3 already uses about twice as many parameters as our organic counterparts do, with much worse results. And it doesn't even have to process a ridiculously large stream of sensor data and run an entire body of muscle actuators at the same time.


> GPT-3 already uses about twice as many parameters

This isn't accurate. GPT-3 has 175B parameters. The human brain has ~175B cells (neurons, glia, etc.). The analog to GPT-3's parameter count would be synapses, not neurons, and even conservative estimates put the human brain several orders of magnitude larger there. It's likely that >90% of the 175B could be pruned with little change in performance. That changes the synapse ratios, since we know the brain is quite a bit sparser. In addition, the training dataset is likely broader than what the majority of Internet users are ever exposed to. Basically, it's not an apples-to-apples comparison.
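
Rough back-of-envelope on that gap, using the commonly cited (and contested) estimates of 10^14 to 10^15 synapses, so treat it as order-of-magnitude only:

    gpt3_params = 175e9                        # GPT-3 parameter count
    synapses_low, synapses_high = 1e14, 1e15   # commonly cited human synapse estimates

    print(synapses_low / gpt3_params)    # ~571   -> roughly 3 orders of magnitude
    print(synapses_high / gpt3_params)   # ~5714  -> closer to 4 orders of magnitude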

That said, I agree that simply scaling model and data is the naive approach.


OPT-6.7B is good, but not even close to GPT-3.

If you can get GPT-like performance out of a 17B model, you should publish that.


I’m referring to post-training pruning, not smaller models. This is already well studied, but it’s not as useful as it could be on current hardware (deep learning currently works better with the extra parameters at training time).
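
For the curious, a minimal sketch of what post-training magnitude pruning looks like in PyTorch; a single linear layer stands in for a real model, and the 90% figure just echoes the number upthread rather than a guarantee:

    import torch
    import torch.nn.utils.prune as prune

    layer = torch.nn.Linear(1024, 1024)
    prune.l1_unstructured(layer, name="weight", amount=0.9)  # zero the 90% smallest-magnitude weights
    prune.remove(layer, "weight")                            # bake the pruning mask into the weights

    sparsity = (layer.weight == 0).float().mean().item()
    print(f"fraction of zeroed weights: {sparsity:.2f}")     # ~0.90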

Retrieval models (again, lots of published examples: RETRO, etc.) that externalize their data will bring the sizes down by about that order of magnitude as well.
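
Not RETRO's actual architecture (that one feeds retrieved chunks into the transformer via chunked cross-attention), but the core retrieve-then-condition idea, with made-up placeholder embeddings, looks roughly like:

    import numpy as np

    def retrieve(query_vec, db_vecs, db_texts, k=2):
        # Cosine similarity between the query and every stored chunk embedding.
        sims = db_vecs @ query_vec / (
            np.linalg.norm(db_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
        )
        top = np.argsort(-sims)[:k]
        return [db_texts[i] for i in top]

    # Placeholder database: the knowledge lives here, not in the model's weights.
    db_texts = ["chunk about dolphins ...", "chunk about GPT-3 ...", "chunk about pruning ..."]
    db_vecs = np.random.randn(len(db_texts), 64)   # stand-in for real chunk embeddings
    query_vec = np.random.randn(64)                # stand-in for the query embedding

    context = "\n".join(retrieve(query_vec, db_vecs, db_texts))
    # A smaller language model then conditions on `context` plus the prompt,
    # which is why externalizing the data lets the parameter count shrink.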


I agree that RETRO is cool. I think you might be stretching it a bit with the applicability, but I take your point.


Is evolution not brute-force?


Naturally, but it takes a few billion years. Not sure about you but I don't really feel like waiting.


LLMs for the most part learn P(someone on the internet said A | internet-specific context B), given ginormous amounts of data. There’s no other type of A, B with that much training data at hand.


Exactly right.

But extremely serious scientists, very smart people, are still drawing epicycles on blackboards studying “consciousness”.


Pretending consciousness doesn't exist, or that it has no function, I think reflects poorly on someone who studies human behavior.


I’m saying that the idea of a bright line between the emergent behavior of a dolphin and a human is very pre-Copernicus.

Studying, even measuring the capabilities of an animal is science.

Justifying a soul is the purview of spirituality, not science. (Nothing against spirituality, I have a spiritual life, I just don’t confuse it with science).


I strongly disagree. Pure behaviorism is just willful blindness. Consciousness is a real phenomenon, as any person not philosophically committed to denying its existence can tell you. It's front and center of our experience of human cognition. It would be quite strange for it to serve no function in the human mind.

Yeah, it's hard to quantify and isolate and experiment on, but that just speaks either to current limitations of human science, or possibly to limitations that cannot be surpassed. Given how much mileage certain philosophical movements have gotten out of the common intuition that emerged during the Enlightenment that everything is scientifically tractable, I understand the resistance to accepting these limitations and opening the door to all of the philosophical consequences of that intuition failing. But sorry, reality doesn't care about your philosophical attachments.


You strongly disagree that other intelligent, social, creative animals are built along similar lines to Homo sapiens?

You really think that we’re a special case, that a difference in degree has become a difference in kind?

I personally experience a feeling that I’m conscious subjectively, but I have no evidence that I’m any more or less motivated by pleasure or pain or community than a dolphin is.

Where do we draw the line? What’s the acid test for “yup now we’re dealing with consciousness”?


I don't mean to suggest that animals don't also have consciousness, or that it's not important to explaining their behavior too.


Why not go the other way and just admit that Descartes gave us the Cartesian plane (among other things) but was at best a product of his time with “I think, therefore I am”.

Descartes was a genius, but he was no Alan Turing, and Alan-fucking-Turing got it wrong on the most famous thing named after him (among the lay population at least). The Turing Test was a great idea, but it’s now trivially useless.

Humans are special to (mostly) themselves and (substantially) other humans.

They are not special to the universe. We’ve had this argument before; it was called the “Inquisition” at least once, and we eventually cleared up once and for all which celestial body revolves around the bigger one.


Your position sounds much more religious and dogmatic than those held by the people you are arguing against.


Is that true? I.e., if you have something that looks too average too often, it’s rather unlikely.


Didn't we try behaviorism until, in the '50s, we decided this notion of "context" was either too small or too intractably expansive to be useful, let alone explanatory, and that it was necessary to start thinking about internal cognitive processes instead?



