Humans who are not concentrating are not general intelligences (srconstantin.wordpress.com)
461 points by jseliger on Feb 26, 2019 | 217 comments



One of the things about fakes is that they evolve over time.

Believe it or not, an incident from 1917 involving Arthur Conan Doyle, creator of Sherlock Holmes, is instructive.

The "Cottingley Fairies" were imagined when a couple of teenage girls took photographs of themselves with pictures of fairies. [1]

The thing that is important is that, to my eyes and I think to a typical person of this era, these photos of girls with cutouts of fairies look like ... exactly that. When I first saw these pictures, I couldn't believe anyone could be fooled by them. But circa 1917, photography had only recently appeared, and so had photo fakes. So the skill to spot the difference had only recently appeared.

Which is to say, I'm pretty sure the author is correct that a good deal of the OpenAI-generated text isn't intelligent text generation but stuff with enough of the markers of "real" text that people might not notice it if they weren't paying attention.

Moreover, I strongly suspect that if this sort of sham were to become more common, the average person's ability to notice it would increase significantly.

[1] https://en.wikipedia.org/wiki/Cottingley_Fairies


>The thing that is important is that, to my eyes and I think to a typical person of this era, these photos of girls with cutouts of fairies look like ... exactly that. When I first saw these pictures, I couldn't believe anyone could be fooled by them. But circa 1917, photography had only recently appeared, and so had photo fakes. So the skill to spot the difference had only recently appeared.

Tons of people believe in crude "ufo" and "bigfoot" and "chupacabras" and "lock ness" and such photos, well into today though.


People believe in the Falkirk Wheel, even though it's clearly bad CGI!


Loch Ness.


Nitpick, but, isn’t Loch Ness just the name of the lake?


How do you know the lake is real?


I've seen it. It could have been a very convincing fake, but.. if that doesn't count, then nothing does.


How do I know you're not in on the conspiracy.


Maybe it was just an oversized puddle, and not actually a lake.


The fake water there is very cold.


Amazingly no, the lake is called Ness; Loch just means "lake" in Scottish Gaelic.


And Frankenstein was the name of the doctor.


You can think of Frankenstein's monster as the doctor's son, so they would also be a Frankenstein.


Ohh nice one :)


That's probably right, but as with the fairy pictures the technology to make the fakes is also advancing, and no one can tell whether a picture that's been made recently is fake or not just by eye.

As fake text becomes more common the tools to make it will become more advanced to the point where we can't tell it apart from the real thing.


Surely the only worry about 'fake text' is scale? People have been able to write down lies for thousands of years.


Personalization of the fake text to precisely match what the reader is most susceptible to is probably a bigger problem, especially if the bad actor is able to target small groups (say, politicians?). An AI writer that could write what someone will believe, in the style they're most open to, using their personal information and a history of exactly what they've already seen, would be very hard to resist.

Couple that with scale and it'd be game over for distributing written information across the internet. No one would be able to believe anything they see online any more.

Although, weirdly, that actually sounds like a decent use case for a blockchain.


Well, most "high-value" groups like politicians, journalists, billionaires and such are targets right now in the sense that intelligence agencies and private opportunists have their information and trying to use text to influence them. The AI we're talking about isn't as good as human and so it's not going produce things that even as well tuned as a people currently do - since the method involves just emulating normal text, the AI is, at best, going to become nearly as "good as average".

But it's reasonable to say this could do a bit of damage to "moderate value targets". Some portion of retirees today are already "infected" with fake-news obsessions. Not only would you have personalized spam/social-engineering, but you could train the AI further on what worked, even with a lowish success rate.

All that said, it seems like the OpenAI text generator would not be such a customized social-engineering constructor. Rather, such a thing would have to be trained by the malicious actors themselves, who have their own data about what works. So for the now-always-in-the-background question - is OpenAI's reluctance to release the code justified? - the answer still seems like a no.


Of course, any AI that is sufficiently advanced to sway both public and personal opinion will probably also be able to mount a 51% attack against whatever blockchain we expect to refute its lies.


Not so: 1) because the abilities of AI don't scale in that manner and 2) because, unlike in a true decentralized blockchain, there are centralized trust sources that you can use to verify the content. This is really a better use case for Keybase rather than blockchain.
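
To make the "centralized trust sources" idea concrete, here is a minimal sketch of signing and verifying content provenance. It assumes Python with the `cryptography` package; the trusted key directory (Keybase-style) is assumed rather than shown:

    # Toy sketch: verifying content provenance with Ed25519 signatures.
    # Assumes the author's public key comes from a trusted directory
    # (a Keybase-style identity service - hypothetical here).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Author side: generate a keypair and sign the article once.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()  # published via the trust source
    article = b"No monster was found in the loch today."
    signature = private_key.sign(article)

    # Reader side: verify the article really came from the claimed author.
    try:
        public_key.verify(signature, article)
        print("provenance verified")
    except InvalidSignature:
        print("forged or tampered content")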


Scale is a big problem, though.

If it's cheaper to automatically create noise than it is to automatically remove it, public debate on the internet becomes impossible.


One cent per post.


That's a startup idea.

Reddit where each account has a bitcoin wallet connected. Every comment/post/upvote costs like 0.1 cents, every upvote on your comments/posts gives you 0.09 cents.

The rest is used for running the website (so no ads).
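
A back-of-the-envelope sketch of those numbers in toy Python (the 0.1-cent cost and 0.09-cent payout are from the comment above; everything else is illustrative):

    # Toy model of the pay-to-post economics described above.
    POST_COST = 0.10       # cents paid to post or comment
    UPVOTE_PAYOUT = 0.09   # cents returned per upvote received
    # (each upvote also costs the voter 0.1 cents; the 0.01-cent
    # spread per upvote is what funds the site)

    def poster_net(upvotes: int) -> float:
        """Net cents for one post that receives `upvotes` upvotes."""
        return upvotes * UPVOTE_PAYOUT - POST_COST

    for n in (0, 1, 2, 100):
        print(f"{n:>3} upvotes -> {poster_net(n):+.2f} cents")
    # 0 upvotes cost you 0.10 cents; you're net positive from 2 upvotes on.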


That seems like it's going to put a HEAVY incentive on the echo chamber effect though.

If it costs me real money to have an opinion that runs contrary to the herd, I'm not going to spout my opinion regardless of whether that opinion is factual and accurate.

That whole thing seems dangerous to me for some reason that I can't pin down.


It would cost a tiny amount to state your opinion as an individual, but spamming an opinion would be more expensive. A problem is that deep pockets would allow you to fake wide support for a minority position, and that most of the money would just circle within an opinion group.

Ultimately I think we will come around to the idea of verified digital identities almost everywhere. You could still have an AI agent spam in your name (or pseudonym), but you could not pretend to be multiple people.


Then remove the "get money for upvotes" aspect.

I can see politicians using the service as a propaganda channel, but they already do the same with free services, and this way at least it would cost them something.


It's not going to cost you real money to have a contrary opinion. Just 0.1 cents. You just won't get that 0.1 cents back if it's contrary. I didn't see any -0.09 for downvotes, just +0.09 for upvotes. Or you could make it net out with a minimum of zero.


It's one of the directions explored by status.im (full disclosure: I work there), with tools such as Visibility Stake and Tribute to Talk.


Could you say more? Is it already available?


The app itself is already available in beta on Android. It is an Ethereum client for mobile, and it includes a messenger that uses Whisper, a gossip protocol, to transmit data (at least for now), which provides darkness and encryption.

The first iteration of Tribute to Talk will be pretty basic with a simple transactional model. A pays B to start talking to B, B can block A at any time. But the smart contract developers are working on more sophisticated schemes for the future.
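
A toy model of that first-iteration rule - A pays B's tribute to start talking, B can block A at any time - in plain Python (an illustration only; the names and structure here are made up, not Status's actual contract code):

    # Toy state machine for the basic Tribute to Talk rule above.
    class TributeToTalk:
        def __init__(self, tribute: int):
            self.tribute = tribute   # amount B demands before A may talk
            self.paid = set()        # senders who have paid the tribute
            self.blocked = set()     # senders B has blocked

        def pay(self, sender: str, amount: int) -> bool:
            """A pays B's tribute to open a conversation."""
            if sender in self.blocked or amount < self.tribute:
                return False
            self.paid.add(sender)
            return True

        def can_message(self, sender: str) -> bool:
            """Delivery succeeds only for paid, unblocked senders."""
            return sender in self.paid and sender not in self.blocked

        def block(self, sender: str) -> None:
            """B can block A at any time."""
            self.blocked.add(sender)

    b = TributeToTalk(tribute=10)
    assert not b.can_message("alice")  # hasn't paid yet
    assert b.pay("alice", 10)
    assert b.can_message("alice")
    b.block("alice")
    assert not b.can_message("alice")  # blocked despite having paid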

Here are two related discussions on our discuss forums: Visibilty Stake for Public Chat Room Governance https://discuss.status.im/t/visibilty-stake-for-public-chat-...

PRBS protocol proposal - An incentivized Whisper like protocol for status https://discuss.status.im/t/prbs-protocol-proposal-an-incent...

If you want more precise answers don't hesitate to post there, Ricardo loves to discuss these topics


In-browser crypto mining would allow this to work well too.


Yes, and like with junk food (junk = fake here) the solution is provenance. In the same way we should not consume food without some idea of its provenance, we need to verify the provenance of the information we consume.


Very relevant article [0]. The second picture of the article is a real eye-opener on the obviousness of fakes.

[0] https://www.theregister.co.uk/2019/02/25/ai_roundup/


It reminds me of this: https://en.wikipedia.org/wiki/Turing_test#Na%C3%AFvet%C3%A9_...

I remember reading once that a machine had finally passed the Turing test, but when I looked in detail at what some of the judges on the panel had thought was a human talking, I realized how subjective the test was.


Right now people talk to bots every day, in the form of customer support.

People's modern perceptions of bots are much more evolved than when the test was first theorized, so now is the time to do an actual Turing test.

It will probably fail, but we are surely close to the point where an AI will actually pass it.

My guess is we are 10 years away from that moment. It will be like the movie "Her".



I don't think that's true at all. People can rarely tell the difference between an article without facts or sources and a well sourced article. It's only going to get worse when the bots can generate convincing 'fake news'.


Of course this is more or less what the NPC meme is about, insinuating that others are on autopilot and swayed by fake news.


That’s a really good point. Thanks for the food for thought.


It'll be interesting to see which evolves faster.


I would bet my entire net worth on it being machines.


Does that include saleable organs?


Those won't be worth much when the machines take over.


This article and the Overcoming Bias post it links to ("Better Babblers") remind me of when I was in high school and was reading stuff like Gödel, Escher, Bach and Dawkins's The Selfish Gene (the first two books I ordered after mom allowed me to use Amazon) while also learning online about Wittgenstein and statistical text generation (Markov chains). All this led to a kind of crisis of identity, because in their own ways all those things point to a deconstruction of the self.

That sounds hokey, but to explain briefly: GEB indicates that your self (your consciousness of being someone) is a swirling self-referential symbolic process (a "strange loop"); Dawkins indicates that your self is a kind of evolved meme whose function in nature is to further your family of genetic replicators; Wittgenstein indicates that your self is a habitual user of language where deep meaning is not as important as social function; and Markov chains indicate that your self's use of language can be modeled at least to a rough approximation by extremely simple statistics.
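
To make "extremely simple statistics" concrete, here is a minimal word-bigram Markov babbler (a toy Python sketch; the one-line corpus is made up for illustration):

    # Minimal word-level Markov chain: the next word depends only on
    # the current word, with probabilities estimated from a corpus.
    import random
    from collections import defaultdict

    def train(corpus: str) -> dict:
        chain = defaultdict(list)
        words = corpus.split()
        for cur, nxt in zip(words, words[1:]):
            chain[cur].append(nxt)  # repeats encode frequency
        return chain

    def babble(chain: dict, word: str, length: int = 12) -> str:
        out = [word]
        for _ in range(length):
            followers = chain.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = ("the self is a loop the self is a meme "
              "the meme is a habit the habit is a self")
    print(babble(train(corpus), "the"))
    # e.g. "the self is a meme the meme is a habit the self ..."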

So I clearly remember wondering "Am I just a kind of slightly more advanced Markov chain?"

I think this is also the unsettling core question of Blade Runner: are we also artificial?

I wonder what theologians might say about this question.


> So I clearly remember wondering "Am I just a kind of slightly more advanced Markov chain?"

If you listen to small children's babbling, they sound exactly like little Markov chains. As they start to get older, their 'next()' function is informed more and more by semantic connections, reasoning, chains of association etc. until they're talking as people, not just like people.


I'm looking forward to observing this more closely in the near future. :)

We tend to think of language as separate from the rest of life, maybe because it's so transportable, but in a way it's strange to imagine an intelligence that only deals in language, and not even the language of "its own species."

A baby babbles I guess for fun but also because it's part of the process of playing with the world to learn to cope with it and to become an effective person. So talking, walking, eating, etc, are all part of the same general activity of life, and they all have their own forms of "grammar."

The semantic connections and associations go all across embodied life; you can't really use human language without being a person who also sees, moves, eats, loves, etc.

Wittgenstein's Philosophical Investigations starts from the first paragraph by quoting Augustine:

> ‘When they (my elders) named some object, and accordingly moved towards something, I saw this and I grasped that the thing was called by the sound they uttered when they meant to point it out. Their intention was shewn by their bodily movements, as it were the natural language of all peoples: the expression of the face, the play of the eyes, the movement of other parts of the body, and the tone of voice which expresses our state of mind in seeking, having, rejecting, or avoiding something. Thus, as I heard words repeatedly used in their proper places in various sentences, I gradually learnt to understand what objects they signified; and after I had trained my mouth to form these signs, I used them to express my own desires.’ (Augustine, Confessions, 1. 8.)

Wittgenstein thinks this is a good example of a misunderstanding of how language and language acquisition works. Then he formulates an understanding of language that focuses less on meaning and signification and more on social activity and speech acts.

I have a feeling that I myself am babbling right now, I don't know exactly what my point is and I'm hungry for breakfast...


> A baby babbles I guess for fun but also because it's part of the process of playing with the world to learn to cope with it and to become an effective person. So talking, walking, eating, etc, are all part of the same general activity of life, and they all have their own forms of "grammar."

Yeah, exactly. Almost everything a small child does is a directed attempt to generate training data, whether they're talking to you or talking to themselves or grabbing random things or trying to crawl into traffic.


A baby also babbles more if they get positive reinforcement.

I am not finding the link right now, but there were some researchers who attached a mic to very young babies that could pick up very faint sounds and an earphone to their caregivers and the researchers listened carefully to the baby and signaled the caregiver to touch the baby every time the baby made a (basically inaudible) intentional speech sound. After a short time the baby started producing a lot more of those sounds.

Similarly, since babies can’t really talk, caregivers can advance their communication by a few months by teaching them a simple sign language.

(Disclaimer: I didn’t do either of these things with my 2 year old. Just read about it.)


> I'm looking forward to observing this more closely in the near future. :)

There's something I remember reading about, years ago, by I think a linguist: there's a point during a child's development where their language skills appear to suddenly get worse, which they thought was because the kid stops just rote-repeating and instead tries to conjugate words themselves (and gets it wrong because it's so new).


They actually do this all the time in all sorts of things, it's fascinating. One day they'll be proficiently grabbing things with a full-paw grasp and the next they'll be dropping stuff left right and center because they started using pincer grip or something.


I wrote a series of Markov chain chat simulators as a teenager. In fact I independently "invented" the concept, and I was crushed to discover I'd been scooped by several decades. I figured out a lot of tricks for making the text more coherent, but I abandoned the project when I decided the road to an artificial general intelligence did not pass through here: it's definitely attached to the main trunk, but represents a side-offshoot.

My last thinking on the matter / current thinking in hindsight is that the apparent and peculiar power of Markov chains to model human language is simply a necessary consequence of the constraints of serialization, in much the same way that different libraries for serializing complex data structures might convergently evolve. Also, an ability to predict the likely next symbol provides some redundancy and error-correction ability. It could simply be that languages which approximately satisfy the Markov property are such a cheap and effective way to get error resilience that there's little reason for a language not to evolve that way.

I use the word "approximately" deliberately: it was the realization that a chat AI making and holding coherent thoughts is incompatible with the Markov property that caused me to abandon my work.
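
To illustrate the error-correction point: with nothing but next-symbol counts, you can repair a blanked-out symbol by picking the most probable follower of the symbol before it. A toy sketch (corpus and hole marker made up for illustration):

    # Toy error correction from next-symbol statistics: replace an
    # unreadable character with the most likely follower of the
    # character before it, as estimated from a corpus.
    from collections import Counter, defaultdict

    def bigram_counts(corpus: str) -> dict:
        counts = defaultdict(Counter)
        for cur, nxt in zip(corpus, corpus[1:]):
            counts[cur][nxt] += 1
        return counts

    def repair(text: str, counts: dict, hole: str = "?") -> str:
        chars = list(text)
        for i, c in enumerate(chars):
            if c == hole and i > 0 and counts[chars[i - 1]]:
                # most probable next symbol given the previous one
                chars[i] = counts[chars[i - 1]].most_common(1)[0][0]
        return "".join(chars)

    counts = bigram_counts("the quick brown fox jumps over the lazy dog")
    print(repair("th? quick b?own fox", counts))
    # -> "the quick brown fox" ('e' follows 'h', 'r' follows 'b' here)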


> I think this is also the unsettling core question of Blade Runner: are we also artificial?

> I wonder what theologians might say about this question.

I'm not a theologian, but I was very Catholic for the first 22 years of my life, and for me Blade Runner was more about the relation with God than about the question "are we artificial".

Basically it was a Promethean/messianic story - Man searches for God to get answers and fight for salvation.

And the answer he got was "whatever, I don't care, your life has no meaning and you cannot be saved". And then the Man forgives the God and dies - reversing the Jesus story.

It resonated strongly with me; the world is much more consistent with an incompetent God that doesn't care than with a loving, caring and omnipotent God.

As for "are we artificial" - what does it even mean? My former religion accepted evolution, so the question was "was it God that caused evolution, on is there no God and it was just an accident".


We clearly possess an autopilot and an actual pilot function.

Actual piloting (critical thinking, reasoning, creativity) requires more mental effort and is much slower. Perhaps our brains are optimized to have the pilot train the autopilot (so to speak) when necessary, but otherwise leave things to the autopilot? I suppose that's why training, muscle memory, and practice are so important.

I don't think any of this is controversial but it does seem a lot more human activity runs on autopilot than we thought.


I'd like to contest the use of the term "actual" pilot for the more deliberate pilot.

For some reason, we as humans seem to like thinking of our conscious / deliberate pilot as ourselves, and the subconscious / autopilot as some form of "Other" somehow cohabiting our bodies.

(I'd expect this viewpoint to be especially true for the academic / programmer crowd here on Hacker News, who stereotypically tend to be more skilled in logical / deliberate forms of thought, and comparatively lacking in the intuitive / automatic, such as social skills).

However, the subconscious is as much YOU as the part of which you are more aware, and, in fact, probably has a GREATER effect on your actions. In conclusion, the autopilot is as deserving of the term "actual" pilot as the deliberate pilot is.


You can’t just say it’s “as much YOU” as an argument against the autopilot analogy. A plane’s autopilot is still part of the plane as well and will also likely have a greater effect on the action (of flying the plane) than the actual pilot.


derangedHorse has a good reply to this (not that these are new arguments, eg "can the foot say because I am not the eye I am not part of the body?")

I was not referring to the subconscious as autopilot. To stretch this already tenuous analogy further the subconscious would be the flight control software.

I'm saying that during our waking, supposedly conscious experience, our conscious self (the pilot) is actually rarely in control. Most of our choices are automatic (autopilot) and the conscious mind retroactively invents rational explanations for them, when we bother to notice them at all.

Free will (if it exists) is probably almost entirely contained within the ability of the pilot to repeatedly tweak the autopilot settings, which is entirely indirect and a far smaller degree of control than we like to suppose.

If you consider evolution this isn't so surprising. Intelligence is just one strategy for adaptation and evolution repurposes and builds on top of existing structures. Why wouldn't the rational conscious mind evolve as a tweak on top of an unconscious intelligence, which itself is a tweak on top of subconscious/instinctual behavior? If animal studies are proving anything it's that intelligence is a spectrum and many supposedly human behaviors (concept of self, tool use, et al) are present in other species.


One rabbit hole to go into with this thought is Hubert Dreyfus and his use of Heidegger's philosophy to explain why the early AI research program was deeply misguided by focusing on explicit cognitive symbolic representation and logical manipulation rather than embodied motivated learning.

From what I understand, Heidegger's phenomenology is related to this piloting and autopiloting, exemplified by the way a hammer only appears as explicit conscious representation to a woodworker when something is wrong with it. The hammer's normal relation to the woodworker is just its ordinary function; the tool's own presence recedes.

But then I also wonder if there's something suspicious about the pilot/autopilot dualism. It seems to mirror dualisms like culture/nature and animal/human. Maybe what we think of as piloting is not as critical, rational, and creative as we are inclined to believe?


> The hammer's normal relation to the woodworker is just its ordinary function; the tool's own presence recedes.

We see the same thing with auto racing. Really experienced drivers "feel" as if the car is an extension of themselves. Its mechanical nature disappears beneath conscious thought.

My hypothesis (admittedly based on little evidence) is this is an optimization function due to conscious thought being a relatively slow process. Once the autopilot has integrated the necessary functions the conscious mind can get out of the way and focus on more "important" things. This appears to apply to memory as well: unless the situation is in some way extraordinary the brain doesn't bother keeping full details in long-term storage. When interrogated later our minds just make up the likely details and call it good enough.

> But then I also wonder if there's something suspicious about the pilot/autopilot dualism. It seems to mirror dualisms like culture/nature and animal/human. Maybe what we think of as piloting is not as critical, rational, and creative as we are inclined to believe?

If it helps I was thinking of the mind as four layers: unintelligent instinctual/automatic systems, subconscious processing, autopilot, and pilot (conscious rational mind).

That said I think you are correct: being critical, rational, or creative is probably rarer than we like to believe. Maybe it is partially a cultural belief, as if admitting we are just cruising through life most of the time makes us seem stupid or un-human?


Kahneman's System 1 and System 2.


I'd like to call attention to describing piloting as feeling slower.

It sounds like you have said something objective. But really perception of time is a completely subjective phenomenon as well, constructed from unconscious processes which create conscious experience.

People in great fear report time slows down, but also top tier athletes.

Thinking hard about something is one particular kind of unified subjective experience.


While you are correct that perception of time is subjective, I'm not referencing that phenomenon at all.

I'm talking about measuring reaction time or brain imaging studies.

For top athletes autopilot kicks in and reacts to the situation, then issues commands to our subconscious body control processes, which then issue nerve impulses to begin movement, all before the pilot (prefrontal cortex?) has even perceived the situation, let alone made any decisions. The autopilot knows how to do this via repeated training guided by the pilot function.

You can also observe this in brain imaging studies which can show the body reacting before any thought took place. If pressed people will invent a rational justification for their behavior but the brain images prove this is entirely post-hoc most of the time.

My theory is this is due to conscious thought being so much slower, but I don't have any proof.


> Perhaps our brains are optimized to have the pilot train the autopilot (so to speak) when necessary

In a process commonly known as dreaming? I don't think it is a coincidence that new tasks we are currently learning to perform (that are still "piloted") often appear in the occasional snapshots of dreams that somehow cross the boundary into our daytime consciousness.


The author admits that he had a hard time judging the authenticity of generated text on topics he was not interested in. This probably indicates that he might also have trouble detecting value in those topics even when discussed by a real human. Suggesting that people are not truly reasoning when speaking, or do not exhibit general intelligence because they cannot solve assigned math problems, was particularly infantile. I think there is just some heavy bias towards formal logic here, devaluing less easily analyzable forms of intelligence.

I doubt the school teachers who failed his mathematics exam would have major issues finding at least some of the problems in the generated texts he gave as examples.

So asking the question whether humans are "artificial" stems from a place of low empathy, I think.


*she :)


This is, I believe, one of the big reasons why people find it hard to reject religion (another big one being the difficulty of accepting death, of oneself or of loved ones). There's genuine mystery at work here, or so it seems.

You need to distinguish consciousness from ego/free will. Consciousness is the fact that there's something rather than nothing, subjectively. That you seem to experience: sights, sounds, sensations, emotions. Under that definition consciousness is something that cannot be fake (no matter what's the nature of the universe; no matter whether you're asleep or awake), simply because you experience things.

Ego/free will is a separate concept and is indeed an illusion (or an evolutionary artifact if you like The Selfish Gene). There's a lot of evidence for that, the simplest being that no mainstream physical theory allows us to have made choices any differently than we have (barring true randomness like quantum mechanics predicts; but it's also easy to see that that's not freedom, just plain randomness).


As I see it, the basic fact of ego is something that similarly cannot be fake: my mental life has a nexus called “me” at the center of agency, being a symbol for my own body, my desires and fears, etc. It’s no more an illusion than my cat meowing for food in the morning is an illusion. He’s there, he’s hungry, he wants food, so he meows.


It's less than fake. You can't actually pinpoint it. There is no little you sitting in your head over the dashboard and making decisions. You don't decide when to decide to move your hand. It just comes to you. If we disconnect your hemispheres which one is going to be you?

There are books on the topic; I don't have the slightest hope of getting the point across, but I think if you ponder it long enough you can come to the conclusion even on your own. You can call this familiar pattern "myself", but it's not like there's any ego that you can find in there.

And what I mean here is not "let go of your self, be free and enlightened". It's just that it's all more reasonable from a purely rational perspective. There are a lot of patterns that you can observe in and around you. You can call them all John Doe, but when you examine these patterns, you can exclude some as something that's "not you", just a thing that happens here. If you keep doing that I don't see how you can extract what you call ego. You could group some selfish behavior patterns and associate them with ego, as in when we say somebody has a big ego, but I don't think that's what you mean here.


As a first approximation, let’s say I am my physical body. If you’re sitting in the chair across from me you can see me. If I wave, what you see is me waving. I’m a body in space with some peculiar characteristics called “life.”

Fine, but what happens when I die? My body is still there but it’s not really me anymore. I’m gone. The body left behind is just a husk. So it seems that I am not just a hunk of matter, but at least a hunk of matter imbued with a dynamic pattern of activity: breathing, perceiving, reacting, speaking, and so on.

Fine, but what about sleepwalking? In some sense it’s me who’s doing the things the sleepwalker does, but in some crucial sense it’s not really me. That’s a subtle and strange distinction, but we make this distinction in everyday life. I don’t blame someone for snoring, and when I feel annoyed I recognize I am being irrational.

And so on until you start to refine a picture of the person’s self as something like that body’s everyday nexus of thoughts, emotions, and decisions, being the result of socialization and growing to adulthood, especially within a narratively coherent life.

There’s no need for a homunculus ego in some infinite regress of ultimate causality—that is indeed a nonreal, fantastical kind of self, the kind of self that early Buddhists criticized the brahmins of their time for promoting as the true self.

Real selves are just developed, cultivated, socialized entities that arise as psychological realities. There might be more complex, nuanced structures than just “one body, one self.” But the insistence that selves are just nonexistent delusions seems to me like an unnecessarily provocative way of formulating something.


So you seem to try to define it based on "conscious decisions". But since I recognize your username I'm sure you know how the story with decisions and free will goes[1][2].

Or if it's "nexus of thoughts, emotions, and decisions" then maybe you think more about patterns of behavior. If you would call it just as a currently observed patterns of behavior and update it as behaviors change then I think it's just a matter of naming them or not, they are clearly there.

But my point is that there is no nexus. Thoughts, emotions and decisions are there, but there is no single central point to them, apart from maybe the current point in time, which is basically a story that allows you to reason about the world, as explained in [2].

But even the common-sense distinction you are talking about seems very vague. You "say something before you think", you do something "on autopilot", or you're coding while so deep in the flow that you are not aware of yourself, etc. You, or not really you?

You can blame someone for an outburst of anger (he's not sleeping) to then realize he had a tumor in the brain pressing against the amygdala. It can be the same story with yourself.

So where is self? Naming people is useful. It's not about that. It's just that we tend to look and talk about some inner pattern inside that pattern without really ever finding it.

Even treating whole body and behavior as a pattern seems somewhat context dependent. Maybe I was part of Milgram experiment or fought in some war - "that wasn't really me".

I'm sorry if it sounds provocative. I know that "losing self" has some associations that don't necessarily promote rationality.

I'm just interested in how people organize it in their heads.

1. https://www.nature.com/news/2008/080411/full/news.2008.751.h...

2. http://www.scholarpedia.org/article/Multiple_drafts_model


You can also say that the desk lamp in front of me is an illusion because if I take it apart then there's no single atomic center that is truly the essence of the lamp.

That's how some Buddhist texts approach the question, typically with a horse cart or a wheel as the object they demonstrate lacks a fixed essence.

To which I say, sure, fine, there is no essence. There's still a table, a cart, a self! We don't need such essences. We don't need to be able to pinpoint the exact location or center of every entity we take to exist.

It is indeed interesting to look at edge cases and borders, like what happens to the self during states of deep meditative absorption, for example. Well, let's say it temporarily dissolves, like when you heat up a piece of wax. Maybe that's accurate, maybe not.

Buddhists do talk about the self like this, that it comes and goes, that settling into samadhi makes it calm down and fade into a more diffuse state, and so on. They also say that the self is ultimately an "illusion", but in the same sense that everything is an illusion: it is temporary, compounded, dependent, etc, while we sometimes are deluded to think otherwise, e.g. that our soul is eternal which is of course a common belief (that I do not hold).

Buddhists also always add that the teachings about "not self" are not to be taken as metaphysical claims, but as useful instructions for teaching a practice, the practice of meditation leading to liberation. Or in a lighter sense, you can practice having a less limiting self-definition, or to accept that your self is dynamic and expansive.

Still, people exist, and "self" is basically just a word I use to denote myself as a person. In my solitary flow states I am in a different state than in ordinary social situations; maybe I am a bit like a chameleon, too.

People are extremely complex and marvelous, so they exist in many different ways, within many different kinds of relations and environments, and they are constantly changing and adapting, but they are also constantly maintaining and preserving.

That's all a bit of a ramble, I didn't have time to organize these thoughts properly!


I enjoyed it. A bit hard to dive deeper in a HN thread. But your view doesn't seem that far from mine after all. Similar patterns just differently organized.


> It's less than fake. You can't actually pinpoint it

This doesn't necessarily follow. Emergent properties are still real, even if you can't separate that property from the system that generates it.


I agree. But you need to define that property. That's what I mean by pinpointing it: describing what we are talking about exactly.

The fact that we are talking about it is not enough for it to exist.

Let's say there is a ray of light from the sky. And people start saying that there's a white tower standing on the ground, as high as the sky. Everybody knows what we are talking about; it's just that we go there, examine it, and it turns out it's just a ray of sun from behind a cloud.

Not such a great example because a white tower would still be much more clearly defined than the ego is, but that's what came to mind.


> It's less than fake. You can't actually pinpoint it. There is no little you sitting in your head over the dashboard and making decisions. You don't decide when to decide to move your hand. It just comes to you. If we disconnect your hemispheres which one is going to be you?

The way I see it, your perception of there being "a little you" is basically a self-referential part of your brain taking a bunch of status readings from all over your brain and using them to generate the sensation of the little you doing whatever it is that you're doing.

The homunculus is there in your head, but he's just a picture on the screen in your Cartesian theatre.


My guess is that it goes the other way: many people insist there must be something to our minds beyond what could result from physical processes in a physical brain that has evolved from much simpler predecessors, because of the profound and disturbing implications it has for what we are and what will become of us.

To be clear: the mystery is there - no-one has yet shown how minds do work - it is the assumption that it must be forever so that is a matter of faith (as is the opposite view; the issue is why a person would lean one way or the other.)


I think we're agreeing. I didn't spell out that I do think consciousness is not understood, it's a mystery, at least for now. The religious solution is to slam some superstition on top of that mystery to make it more digestible.


I interpret ego as the mechanism that enables competition for survival and mating. It's what makes you feel more deserving of a limited resource, such as food or a mate, than someone else, and it would seem to be a trait that's beneficial for passing down genes.


It can be a bad habit, e.g. being egocentric and uncaring towards others. But it's also the main way our ecosystem (jobs and education) works: by us being unchanging persons with a name and sameness.


> So I clearly remember wondering "Am I just a kind of slightly more advanced Markov chain?"

I think humans employ Markov chains (MCs) all the time. (Back in the days of symbolic AI, these became popular as 'frames', AKA case-based reasoning.)

But it's clear that human cognition far exceeds the capacity of simple probabilistic devices like MCs, much as context-free grammars exceed finite state machines. Eventually it became clear that too much of intelligence (memory, logic, and learning) cannot be modeled viably using simple probabilistic mechanisms like MCs or frames.

I'm hopeful that work like GPT-2 will accomplish the same revelation for the limits inherent in probability-based models via deep nets. As long as AI models fail to model semantics explicitly, they will forever create only narrow savants or general morons.
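
One classic way to see the FSM/CFG gap in code: recognizing a^n b^n requires an unbounded counter, which no finite state machine (and no fixed-order Markov model) has. A toy sketch:

    # a^n b^n needs an unbounded counter: no finite state machine
    # (and no fixed-order Markov model) can track arbitrary n,
    # while this trivial context-free-style check can.
    def is_anbn(s: str) -> bool:
        count = 0
        seen_b = False
        for ch in s:
            if ch == "a":
                if seen_b:
                    return False  # an 'a' after a 'b' breaks the shape
                count += 1        # unbounded counting happens here
            elif ch == "b":
                seen_b = True
                count -= 1
                if count < 0:
                    return False  # more b's than a's so far
            else:
                return False
        return count == 0

    assert is_anbn("aabb") and is_anbn("aaabbb")
    assert not is_anbn("aab") and not is_anbn("abab")
    # A k-th order Markov model sees only k symbols of context; once
    # n > k, balanced and unbalanced strings look locally identical.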


You are a collection of chemicals wandering around, stochastically seeking the highest entropy state you can achieve, so yes, you are a sort of Markov chain I suppose.

But so is a rock, so I would argue it is not an overly illuminating fact. Furthermore I suppose you have this very convincing experience of free will, an experience that seems pretty robust even when facing such facts as the brain is just made of chemicals, so I wouldn't worry about it.


One important thing here is that language is not equal to cognition. It is entirely possible to possess a high-order intelligence but learn a concept of language in later years (e.g. deaf people who learn sign language in their 20s). So no, humans are not Markov chains; they are something stranger.


Oliver Sacks's excellent book "Seeing Voices" seems to argue something slightly different:

"A human being is not mindless or mentally deficient without language, but he is severely restricted in the range of his ideas."

To defend the original metaphor: without language, a person might be akin to a Markov chain generated from the relatively small 'corpus' of individual experience.

While the 'well socialized' individual can draw on the vast range of human experiences shared in language. They have access to a much larger corpus.


They may have access to a much larger corpus, but isn't what they actually hear and read also limited to the relatively small corpus of individual experience? I don't want to underplay the density of ideas in language, but I think it's a common mistake to underestimate communication outside of language.


Communication outside of (spoken) language is still a form of language


That would be one meaning of the word language. It isn't always used that way.


> I wonder what theologians might say about this question.

Imho many theists are very aware of these alternative interpretations, but because all of them are rather unsettling and potentially existential-dread inducing, they choose the "Welp, I'd rather go with the God thing, that's less hassle" route.

Nothing wrong with that, we all have a mind of our own that allows us to frame our world view in the way most convenient/understandable to us.

But sadly it seems these differences in world view too often prevent us from agreeing on a consensus about how to go about things, or even where to go in the first place.


I'm not so sure that religious beliefs -- everlasting hellfire, for one -- are all that comforting. If anything, I find the idea of an omnipotent and omniscient God far more unsettling than meaninglessness.


If you're one of the downtrodden, the belief that heaven and hell exist can be a great comfort. You're a good person (even though you suffer now) and will go to heaven, while your oppressors (who now lead the good life) will go to hell.


Yes, but most people know they aren't good, so they carry the burden of guilt plus terror of the afterlife.


>So I clearly remember wondering "Am I just a kind of slightly more advanced Markov chain?"

Oh yes, absolutely. https://www.sciencedirect.com/science/article/pii/S014976341...


> GEB indicates that your self (your consciousness of being someone) is a swirling self-referential symbolic process

It advances the hypothesis.

> Dawkins indicates that your self is a kind of evolved meme whose function in nature is to further your family of genetic replicators;

I'd say your self is rather a battlefield for such memes.


The reason I think you are not artificial has to do with the fact that you are observing yourself.

My actions are undoubtedly being driven primarily by a series of electrical, chemical, and other reactions going off in my mind and throughout my body. Manipulate my brain in various ways and you can manipulate my behavior in various ways. In this regard I'm effectively a glorified automaton. Yet the catch here is that there is something inside here, 'me', observing all of this happen and having the perception of controlling it. When I write a program to generate a random number I find it inconceivable that suddenly some entity poofs into existence observing itself imagining it's deciding on a random number only to inevitably decide on the number that my pseudo-random algorithm had already predetermined given its initial state.

And similarly, even if we made vastly more complex systems that could create a passable replication of human behavior - I do not think there would, at any point, suddenly appear some entity within that machine suddenly imagining itself driving the deterministic decisions occurring within. A religious individual would call this 'me' your soul. I'm more compelled by the simulation hypothesis for reasons beyond the scope of this post. But in either case this is something that will undoubtedly never be proven in any way during our lifetimes, if ever. So it's a place where an individual must come to their own conclusion based on very limited information.

That a bad decision could have unimaginable consequences here is undoubtedly what drove things such as Pascal's Wager [1]. Though he failed to consider of course that life itself could be a test. Willingness to adopt views one does not genuinely believe for hope of future reward and convenient social graces is probably not something that would score so well. Quite the burdensome consideration, life is.

[1] - https://en.wikipedia.org/wiki/Pascal's_Wager


> I do not think there would, at any point, suddenly appear some entity within that machine suddenly imagining itself driving the deterministic decisions occurring within

Why would you think something like that??!! I'm really curious about your reasoning...

To me and other people like me, both reason and intuition are "99.9%" sure that this is how it all works. Life / the universe / math is full of emergent phenomena/properties, things that you'd say "suddenly appear". Even pure math, even basic and boring areas like number theory, has deep structure in it, unseen if you just think from "basic principles" step by step.

The illusion is the opposite kind of thing: things where you (mistakenly) believe you understand the full causal sequence and that there are no extra things that can pop out of the darkness and surprise your reason. This is plain arrogance. "Step by step rational reasoning" doesn't work except in a very small number of cases, because natural processes can only rarely be approximated in a human-brain-friendly number of steps - most of the time you can't reason A -> B -> C, because there are a gazillion steps from A to C, and they're all non-linear, i.e. you can't "compress them" into a fewer number of steps (btw, this is the insight behind deep neural networks and "deep learning" - very basic math operations + some adequate non-linear transforms between them ---> "emergent" intelligence).
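
To make that last parenthetical concrete: a hand-weighted two-neuron "network" computing XOR, something no purely linear model can do (toy Python; the weights are picked by hand just for the sketch):

    # Basic ops + a non-linearity compute what linear models cannot.
    def relu(x: float) -> float:
        return max(0.0, x)

    def xor_net(a: float, b: float) -> float:
        h1 = relu(a + b)        # counts how many inputs are on
        h2 = relu(a + b - 1.0)  # fires only when both inputs are on
        return h1 - 2.0 * h2    # 1 for exactly one input, else 0

    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(a, b, "->", xor_net(a, b))
    # 0 0 -> 0.0, 0 1 -> 1.0, 1 0 -> 1.0, 1 1 -> 0.0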

"Rational thought" as understood naively by lots of non-techical people is a weird distorted aberration.

If I were to subscribe to a mystical viewpoint, it would likely be some form of pantheism or "the entire universe is conscious / pure consciousness" whatever thing; it's the only thing that would even remotely make sense... the whole idea of a "soul" and all the ghost-in-the-machine and "Chinese room argument" nonsense deriving from that... I almost feel that you people thinking this way are an entirely different species from us, how can you "compute" like that? It's almost as if human brains "diverged" at some point and produced two different types of minds with completely different views of the world...


Children. Vastly complex. Gain sentience and consciousness over time.

No need for a soul.

The initial setup of the network (DNA, Hox genes, and a million others) and the environment "is all" that is needed for that fire to light up in there.

Not always, unfortunately. Severe developmental disorders sometimes prevent the forming of that consciousness.


> And similarly, even if we made vastly more complex systems that could create a passable replication of human behavior - I do not think there would, at any point, suddenly appear some entity within that machine suddenly imagining itself driving the deterministic decisions occurring within.

So I code a fleshy-looking automaton that is indistinguishable to you from a human. Every action and communication is 100% convincing. Surely you feel some empathy towards it (since you are not able to tell) and you may even want it not to suffer. Does that change after I tell you what it "really" is? How about other people?

I still can't figure out how very smart people who know a lot of science look at this world and say "here - science", "there - also science", and then "there's me - I know how my brain works and that you can manipulate my behavior just by touching it, but apart from it there's also the entity". It's obviously not there, but I'm also sure it is.

Emergent behavior of complex systems. Is it so hard to believe that we are one?

If you want some romance in all this then how about the universe looking at itself. Is it burdensome?


>> And similarly, even if we made vastly more complex systems that could create a passable replication of human behavior - I do not think there would, at any point, suddenly appear some entity within that machine suddenly imagining itself driving the deterministic decisions occurring within.

> So I code a fleshy looking automaton that it's indistinguishable for you from a human. Every action and communication is 100% convincing. Surely you feel some empathy towards it (since you are not able to tell) and you may even want it not to suffer. Does that change after I tell you what it "really is"? How about other people?

Your comment has nothing at all to do with the comment you quoted. Whether something is merely convincing to you or me is irrelevant, since it wouldn't actually be cognizant either way. The parent comment was discussing meta-cognition and the fact that even if you made a replica it wouldn't have meta-cognition like we do.


A p-zombie. You’re going to split people 50-50 here


Thank you, never knew the name, now I can do some reading.


Ken MacLeod is a sci-fi writer who uses elements of the concept in a few books - The Corporation Wars being the key ones. It's an interesting read if sci-fi is your vibe :)


>Though he failed to consider of course that life itself could be a test.

Before brushing Pascal's Wager aside, one should remember that Pascal was one of the main founding fathers of probability itself. We're not talking about some lightweight mucking around in the mud of primitive mankind's ignorance. The man knew what he was talking about.


I came to the "consciousness is an evolutionarily advantageous adaptation" conclusion in high school too, and it sorta sent me into a death spiral in terms of academics. When the idea that we're all ultimately just bags of chemicals that are subjectively aware of our existence really kicked in, I completely lost touch with schoolwork and any "purpose" I had felt earlier. Ultimately, I think irrationality plays an important role in thinking about our own existence, and rationalizing everything down to its core isn't necessarily helpful.


The book 'Blindsight' by Peter Watts argues the opposite, that (some degree of) consciousness may have been advantageous in the past, but it has since become a parasite that is holding humanity back; humans would be smarter, faster, and more efficient without it.

While the book is science fiction, it does make an interesting case, and some of it is grounded in actual research.


That was one of the most interesting ideas I came across while reading science fiction. The exact reason I like this genre. I still think about it today. Why does intelligence need consciousness? Does it really? Evolution is full of sub-optimal solutions; consciousness may very well belong to this category. Highly recommend the book.

Off topic: There was also another book that made it clear to me that "the stars do not belong to mankind". Something about the spiritual awakening of humankind, leading to another evolutionary tree for our children, while the adults are left to die, knowing they'll never be able to explore the universe. Forgot the name, but I still think about this, too.

Lastly, The Three-Body Problem with its "Dark Forest" theory. I'm not completely convinced by the idea, but it's thought-provoking.


The book you're remembering is Childhood's End.


Well, a lot of kids lose interest in school subjects for a lot of different reasons! I remained very interested in math, physics, and computers, gained some more interest in biology, etc. But this way of looking at life can be weird and alienating. I think part of the fascination is how life kind of keeps going on nevertheless. Regardless of what you think about the nature of language and meaning, you have to act in society. And you sort of have to cope in your own way with the big thoughts while also coping with the ordinary stuff.

Various insights into evolution, biology, materialism, or what have you can't really negate the reality of what's going on today. Or if they seem to do so then the insight is probably incomplete. I'm reminded of the way people use rational scientific rhetoric to exclaim that religion is irrational and dumb; well, but how about using that rational science to investigate how and why religion is a part of human psyche and society? Etc.


I forget who said this initially, but the world is on fire. If I were to sit unmoved in my apartment, the world would eventually push its way under the door and into my life.

I’ve actually been looking for the source for sometime so I could read it.


"I think irrationality plays an important role in thinking about our own existence and it rationalizing everything down to its core isn't necessarily helpful."

It's not irrational to posit that you are more than merely a 'bag of particles'.

Just because scientific materialism, taken to its extreme, might want to describe us as such does not mean it is true.

Scientific materialism is only one metaphysical perspective, based on assumptions - such as that the universe is ordered and can be described with a set of rules. There is no full evidence of this; it's just an assumption. Given that some of the material universe seems to 'mostly' adhere to a set of equations, and because it's objective, we like scientific materialism a lot, but we also have to remember it's not the only way to look at things.

Consciousness itself, or rather, life, the perspective of 'the observer', could be the reality that matters. The expression of life itself is the interesting thing that only seems 'miraculous' from the perspective of materialism because it's literally denied by it -> that materialism can't seem to describe life is not so much a realization of science; rather, it's an assumption that we started with: the universe is just a pile of particles, ergo, we are a pile of particles. The latter does not follow from the former as a logical conclusion; rather, the assumption that 'everything is just particles' basically implies the latter.

It may very well be more rational to accept that life / consciousness is 'real' - and it seems to transcend our materialist conclusions because materialism as a metaphysical perspective just doesn't fully work, i.e. there's a hole in it.

Consider that we ultimately developed logic / reasoning / scientific materialism mostly to enable our lives and expression i.e. it's just a Tool, not a Truth.


Your perspective doesn't seem exactly common on Hacker News, and I appreciate it.

1) I gotta say though when you talk of "the observer" it throws me off as it sounds like the typical quantum woo twisting of the observer effect, perhaps you meant something else? what do you mean by "the observer"?

2) Regarding "the universe is ordered and can be described with a set of rules. There is no full evidence of this, it's just an assumption." - this has so far proven to be a good assumption (as seen by the massive amount of scientific knowledge and verified predictions accumulated), and if anything it seems all evidence points to exactly this. Is there evidence that the universe is more than just 'a pile of particles'? (although that is a somewhat simplistic way to put it)

3) Trying to distill the comment, it seems the main argument is along the lines of "science can't explain life itself and/or consciousness, therefore there must be more" - is that a fair assessment? And in that case, what would convince you of the opposite? E.g. what if "life" were well understood and could be reproduced in a lab, or what if we could reproduce most human-like intelligence with AI, etc.? In other words, what would (realistically) change your mind?


I'm just making the case that the spiritualist argument is rational.

Humans in every culture since the dawn of time have referred to 'spirit' or that which seems to animate matter.

Yes - the 'laws of the universe' we take as a given because they seem to work for us, on paper, fairly well.

But you know what we also take as 'a given'? That you are alive.

'Your life' is kind of more important to you than science. Life itself, and the expression of it, seems to be our #1 concern.

That one branch of thought, Scientific Materialism, doesn't by definition allow for life to exist doesn't negate the nature of life.

1) Not the 'quantum observer' - your spirit, soul, or some other description. The word doesn't matter.

2) The evidence the universe is more than a pile of particles is life itself. And consciousness.

3) "Science can't explain life" - it's worse: Scientific Materialism rules it out completely by definition. If we decided that 'the universe is mathematical rules' - then - 'there is no life'. Creating life in a test-tube probably won't give us the answer.

FYI science also has a problem describing why simple objects can ultimately make up very complicated ones with different properties; it's called 'emergence', and it's a field of study.

Finally, I'll refer you to the concept of 'biocentrism', which is a more material outlook on the subject without getting so overtly metaphysical, and it's done by real scientists. [1]

[1] http://www.robertlanza.com/biocentrism-how-life-and-consciou...


> and it rationalizing everything down to its core isn't necessarily helpful.

It's why I promised myself never to try to understand how cars work. They just work.

If only I could apply that to just about anything...


You might appreciate https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-g... where he compares GPT-2 to a "prediction engine" as per Andy Clark's "Surfing Uncertainty". https://slatestarcodex.com/2017/09/05/book-review-surfing-un...

The latter suggests that the brain is a prediction engine; and things we do are just the brain minimising prediction-error for systems at various levels.

(SSC is concerned about the dangers of AGI, and sees GPT-2 as imitating what people do).
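
A toy version of "minimising prediction-error": keep an estimate and nudge it toward each observation in proportion to the error (a delta-rule sketch; the numbers are made up for illustration):

    # Toy "prediction engine": a scalar estimate updated to minimise
    # squared prediction error on each observation (delta rule).
    def run(observations, lr=0.3):
        estimate = 0.0
        for obs in observations:
            error = obs - estimate   # prediction error
            estimate += lr * error   # adjust to reduce future error
            print(f"obs={obs:.1f} error={error:+.2f} estimate={estimate:.2f}")

    run([1.0, 1.0, 1.0, 0.0, 0.0])  # errors shrink as predictions adapt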


Wait until you hear about Buddhism -- that'll blow your mind!


I would say one indication that you are not artificial is that you even ask the question. Questioning is an extreme form of higher-order thinking.


Nurture over nature, always. Epigenetics/genomics is telling us how we'll really evolve over the next few million years, and it will have more to do with environment than our current genome.


We are created to understand and explore this magnificent creation and as much of its Unfathomable Creator as is possible for a created creature dependent upon time and materiality for our existence.

The Creator of dimension, time, space, matter, energy and the subtle mathematical laws that govern their interactions will ALWAYS be beyond our comprehension, but we can understand a bit of It in very small, very abstract slices.

Mostly, however, we are here to enjoy this wonderful creation and for that reason we are created as moral creatures with an animalistic body. The most subtle law of the universe, that only we live under, is the Law of Karma. This law dictates that we must use our abilities to learn and choose, via our free will, to self-evolve out of our mammalian capacities of pack warfare and alpha-dominance games (most people naturally live above their reptile potential, that is why there are so few serial killers). This is why all the Great Teachings emphasize compassion towards all our neighbors as the destination for a spirituality that is born of an inward seeking for self-improvement via a connection with our Magnificent Creator.

There are many ways for a human being to enjoy this creation, which starts first with our body. There is the physical pleasure of eating and sex; the pleasures of having friends and family, perhaps having children of our own; the pleasures of athletic feats (Alex Honnold WOW!), mental feats (chess, mathematics), creative feats (art, writing, performance), as well as scientific feats that explore the nature of the universe in all its grandeur.

Our intrinsic sense of morality is built into us as a feedback mechanism to nudge us away from mammalian competitive strife and towards truly human cooperation, where those that have the means choose to help those that lack, where all oppression -- based upon ethnicity, form of religion (including none at all), sexual preference or identity -- is stamped out in favor of a free society of equals that each enjoy the respect and comfort that this planet provides when generosity and compassion are the rule.

Such compassion also requires us to fight oppression in all its forms, both personally and as societies and cultures. This is group compassion that stems from individual morality and the understanding that we are all in this together.

The Law of Karma's primary function is to feedback into ourselves the happiness or unhappiness resulting from our treatment of others. This is why so very few ultra-wealthy people are happy: they have built their empires upon the misery of the workers they have used and discarded for the lowest price possible. Note that there was a notable exception I saw crop up a few months ago where a very successful health care company founder gave very large bonuses to his employees in preparation for going full non-profit. He did this out of gratitude and generosity, knowing that his hundreds of millions of dollars was more than he needed and was built upon their backs. That is the essence of the spiritual path. It matters not which form of religion (if any at all) he adheres to. We are measured by our hearts and how we tune our minds to live the truth of selfless positivity over selfish negativity.

The misery upon the Earth in 2019 is the direct result of our free will's ability to choose the most horrific path due to Lennon's "Instant Karma" not existing. Karma is much more subtle than that. You can see its results on Trump's face and those of every person aligned with him. Yes, they can have the pleasure of domination of others or wealth and power, but pleasure is NOT happiness. Happiness comes from within.

This is a part of the Sufi Message of Love. All human beings must unite to selflessly create "On Earth as it is in Heaven" because each of our free wills are equal and the people who lie, oppress and keep secrets have an advantage over the truthful, meek, and kind people in that they not only have chosen to live unfettered from their consciences (the part of us that is the source of our morality) but take pleasure in the misery they inflict upon others. It is difficult for those not yet on the spiritual path to understand how evil a person can become for the simple fact that until we begin to fight against our own vices we do not know how deep human pathology can grow.

We are perfectible just as our software and machines can be made perfect, if we put in the effort and pay attention to the details; don't worry, the universe will test us ;-). Yes, we are all born imperfect, none greater than any other, but we also ALL have the ability to learn and self-evolve from vice into virtue. To reach that perfection, however, we must go within ourselves and beg our Creator for help. That humility and seeking then opens up our potential for only then are we truly living up to our potential to know the bits we can about our Creator and the magnificent tools we, ourselves, are to explore this universe in peace and harmony with each other and the Earth itself.

I suggest anyone interested in this Message to look into Coleman Barks' translated poems of Rumi. His UCTV presentation "Rumi and the Play of Poetry" is on Youtube and is excellent.

All our problems are caused by a lack of love, and any solution that does not emphasize love as its foundation is only a band-aid.

"The Way goes in." --Rumi

For those who ask for proof of what I speak, you must experience this truth for yourself by activating your own free will. If you believe that what I say does not exist, you will be correct from your perspective. That doesn't mean you aren't capable of exploring this sphere of creation or that you are not beholden to the Law of Karma; it just means you haven't opened your spiritual eyes, ears and mind to its reality and remain in the realm of the scientists that shunned Boltzmann and Einstein for their expansion of our understanding of this universe. It is your free will's decision to accept this Message and try, or to deny it and remain as you are. There is no compulsion in religion and I am commanded to love everyone anyway. That is why I try to speak of the sublime joy I experience in my life as a result of trying.

Peace be with you all. We love you. The evil, selfish people are destroying our beloved Earth and inflicting misery on countless human beings.


To be honest, I've read plenty of articles that read like those examples: superficially they seemed to make sense, but when you paid attention to the logic behind them, they made no sense. And those were articles written by real people, by journalists.

And even scientists often seem to write a bunch of meaningless filler that feels scientific in their papers, presumably because that kind of text needs to go in that place in their paper.

Another thing, from the article: "Simple correlations also seem sufficient to capture most polite conversation talk, such as the weather is nice, how is your mother’s illness, and damn that other political party."

You know how hard I had to work at that? I used to be incapable of small talk or maintaining a conversation. I could talk only in those "deep structures" but struggled to put that in sentences that formed a natural part of a conversation. I worked hard at those "simple correlations"; they were not so simple for me.


A key to smalltalk is to understand the game that's being played. When you're talking about something technical, the point is to discover and deliver useful insights on the subject at hand. When you're doing smalltalk, the point is to make the other person feel comfortable and accommodated, to entertain, and to be entertained. Therefore, it's more about trying to understand what the other person is feeling (this is usually a matter of looking at body language, imagining what's going on in their lives) than it is about delivering useful insight. Usually, if in doubt, listening attentively and trying to work out what makes the other person tick is a productive strategy.


I've often felt like an alien anthropologist in conversations. Even if I was aware of the game and saw what's going on, I wasn't part of it. I needed an extra processing step that made me always too late to say the right thing.

I'm suddenly wondering if my problem might be related to my youngest son's speech problem. When he was 3, he couldn't speak sentences; his sentences were just 3 meaningful words in a row. He never babbled, unlike his best friend, who always babbled in long, incoherent Trumpian sentences. He got special speech therapy for half a year which helped immensely, and now, at 4, he makes excellent sentences, if a bit staccato and clumsy, and certainly without any kind of natural flow.

He never babbles, though. I think his Markov chain is broken and he replaced it with a rule-based system that he reworked to produce language.


I have a feeling that an alien anthropologist approach is ultimately better than the reactive approach - I grew up with an autistic brother who was nonetheless socially capable, simply through distilling social problems into a set of hard rules. Sometimes edge cases came up and he got into trouble, but 90% of the time he was actually way more socially able than I was, simply by having a better-formed understanding, less overwhelmed by immediacy and assumptions.

You need a degree of alien anthropology to be able to respond to what's really important in a conversation - an extremely socially capable deaf friend of mine pointed out, for instance, that body language is more important than verbal content in most casual interactions. These kinds of insights are kinda hard to gain from a neurotypical, non-deaf perspective like mine, because you're a bit like a fish that doesn't realize he's swimming in water.


Oh, body language! My 4 year old son is extremely expressive with his face and posture, possibly because of his verbal problems. Even when he could barely talk at all, he was very good at expressing what he wanted or needed. He's so expressive that I've always thought he should become an actor.

My older son, who is verbally very strong, is often nearly expressionless.


One thing I've noticed is that it's reasonably common for news articles to just casually drop in facts that completely contradict whatever narrative the waffle is pushing, and almost no-one notices. (I wouldn't be too surprised if I only notice it a small proportion of the time.) People just seem to come away believing the waffle anyway. It seems somewhat likely that even the Pulitzer committee has been doing the same.


Most normal people decide at some point that math is not for them, in order not to embarrass themselves by claiming competence in a domain in which they are not confident at all. And hence, compared to nerds or seriously talented individuals who can confidently claim this status, they never spend much time on math and eventually seriously lack skills even in simple logical reasoning. This is basically math anxiety. Nerds, on the other hand, are incentivized to gain social status via math skills as a potential escape from their low status. I think this mainly explains the author's observation that these two skills are not correlated very much, basically an introversion vs. extroversion polarization based on social expectations and incentives. Math is also intimidating, so I'd imagine someone with some experience in it also develops overall higher inhibition and hence is a worse verbalizer.

There is a huge literature on the relation between logical reasoning and verbalizing, which the author sadly ignores.


>Nerds, on the other hand, are incentivized to gain social status via math skills as a potential escape of their low status.

This is certainly not how I approached math, and it's the first I've heard anyone say it, even.

Instead, I'm good at math because I enjoyed it. It's simple and logical and my mind worked really well in that way. There was never anything standing in my way of learning math, so I always just picked up any new math easily. Later, because I was already so good at math (and so many people were bad at it) I sought out more math courses as a way for more easy A grades.

Never was it a conscious effort to set up my career or social status.


Some individuals are perhaps purely intrinsically motivated, but I think it's a very tiny minority. I also think there is a good chance that intrinsic motivation itself is a status-enhancing adaptation, evolved by runaway selection. So ultimately, you are executing this adaptation whether you want to or not, much like this fish carefully creating beautiful patterns in the sand, not knowing why he is doing it. It's all about sex.

https://www.youtube.com/watch?v=B91tozyQs9M

https://en.wikipedia.org/wiki/Fisherian_runaway


If nerds were pursuing math to get status they would quit when they got absolutely no status.


But they do get status. With their teachers. With their parents. And, probably the most important, with their peers, aka other nerds.

Damn, I'd even expect most "cool kids" to have more respect for someone better at math (all else being equal), even if their social context won't allow them to show it in any form.


Being good at math got me some "kudos" occasionally, but nothing like being good at almost anything else. Art (music, writing, singing) was way better to be good at. Even other nerdy things like spelling bees and programming got way more acclaim than math.

Math, instead, got mostly derision from other kids and little to no respect from teachers or parents. No, "cool kids" never had even an ounce of respect for math nerds. If they secretly had any respect for them, they certainly never showed it.

And what's the point of trying to gain respect that nobody expresses? It's certainly not something that would be worth pursuing just to get that respect.


Long-term planning/deferred gratification. You were setting up a career as a source of status later in life.


>Long-term planning/deferred gratification. You were setting up a career as a source of status later in life.

It sounds like you're tying yourself up in knots to explain something that everybody already understands. Math is intrinsically fun, but only if you can cut through the ruinously bad educational system and the difficulty of getting started.


Such intrinsic interest to that extent was sexually selected for, so we're doing it only because of that. That's all I'm saying.

Intrinsic motivation makes sense for learning about the world to some extent, but there was not that much hunter-gatherers needed to learn in order to survive.


Quitting is embarrassing, certainly often locally a downward slope on the status landscape.


To put it quite frankly, your position is entirely wrong, and you are doubling down on an even weirder position. I also have never even heard of someone just pursuing math "to get acceptance as a nerd".

Being good at math gives you no status in life. People are proudly anti-intellectual when it comes to math, so the most you'll get if you're quite good at math is "oh wow, that's cool. next topic". People who really like math simply like math, despite it not winning them any social favors for the most part.

Mathematics is quite interesting and beautiful in its own way, so for you to say it's mostly "out of wanting to escape their low status" is both rude and uninformed.

---

Also, it's true that quitting doesn't do you any favors, but people quit things all the time. Especially math. Or new years resolutions. It's definitely not a tenable argument as to why people would stick with math.


Obviously, you're only saying that to get laid, so who can trust you?

You could just as well argue we're all paper-clip maximizers, and simply interpret any evidence to the contrary as short-sighted. The only sense in which you can reduce everything humans do to sex is in the irrelevant and unfalsifiable sense.


> Obviously, you're only saying that to get laid, so who can trust you?

That's correct, but why would that reduce trust?


Because sex is a short-term goal, and the more heavily you optimize for it, the less you'll optimize for being accountable to the long term side-effects of dishonesty.


You risk getting a bad reputation if you don't care about your offspring and social environment. People care about their reputation in order to be socially included (because that enables access to sex, among other reward signals of course, but sex is the strongest).


People could take vows of celibacy, and you'd probably just say they were really, deep down, only doing it to have more sex. And I'll certainly call you a liar, if only because you are so willing to brand everyone else with that label.

Our brains evolved to make decisions. Yes, they were constrained by a need to survive and reproduce. But those two goals are not the same thing. And the existence of those constraints does not actually preclude any other mode of operation.

You're being absurdly reductive when you conflate any motivation with a need to reproduce.


I think it's accurate. There is no doubt about it. I also don't see that there is anything wrong with it. I think you have simply internalized modesty memes, but that ultimately makes you ignore fundamental motivations, which in turn leads to dysregulation.


That's why we invented birth control. So we could be better at reproducing. When human decision making and the need to reproduce come head-to-head, reproduction is always the motivation, even if it results in less reproduction.

This is why your argument is nonsense. You're basically trying to define everything as sex. You're playing a silly word game so you can feel smug about this directionless and immature insight.


I'm not smug. I'm concerned about dysregulation due to ignorance of basic motivational drives.


“Whatever ability IQ tests and math tests measure, I believe that lacking that ability doesn’t have any effect on one’s ability to make a good social impression or even to “seem smart” in conversation.”

It's funny that she uses the phrase “seem smart” when we humans can't give a hard definition of intelligence. In the quote she makes it seem like intelligence is coupled with IQ and mathematical ability, yet concedes that one could “sound smart” in language. She also says those same people could be creative, funny, and relatable, so why not just define different metrics of intelligence here and say that they actually are smart (albeit in different ways)? I can assure you no one would “sound smart” when discussing advanced mathematical theories if their grammar was bad and no one understood the branch of mathematics they were in (a counterexample where one could be smart by IQ and mathematical-ability metrics but not be able to generate coherent speech).


Practical intelligence is clearly multidimensional, and there's no reason why someone who scores well on one dimension should also score well on the others.

Any suggestion that talent-for-math = general-intelligence is actually rather dumb. Ditto for assumptions about poor math skills, which can easily be a product of poor teaching rather than unusually low native ability.

If IQ tests measure anything, it's raw mental speed and memory - useful traits, but not nearly enough to draw a bounding box around general intelligence, which also includes abilities such as intuitive modelling, creative originality, and informal inference.

As the cliche goes, smart people can do stupid things in at least some situations.

Raw high IQ is just as likely to get you to wrong conclusions quickly as it is to give you useful predictions. If your modelling skills don't give you a good working model of the situation you're in, you're going to have a bad time.

Outside of core STEM, modelling depends on social and cultural experience and contextual training. If you don't have those, you're going to be handicapped even if you have a stratospheric IQ.


> there's no reason why someone who scores well on one dimension should also score well on the others.

Except that this is literally what happens; this correlation between seemingly unrelated cognitive tests is referred to as "the g-factor". https://en.wikipedia.org/wiki/G_factor_(psychometrics)

To be fair, this doesn't actually contradict most of the rest of what you say. But this correlation does suggest that there are some shared factors (whether innate, or developed, or both) that affect many or all kinds of "practical intelligence"; one might reasonably call these factors "general intelligence".
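To make the "positive manifold" behind the g-factor concrete: psychometricians usually extract g as the first common factor of a battery of test scores. Here's a toy sketch with entirely made-up numbers (assuming numpy; a real analysis would use proper factor-analysis tooling):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    g = rng.normal(size=n)  # latent "general" ability, real only by construction

    # five fictional tests, each loading partly on g plus independent noise
    scores = np.stack([0.7 * g + rng.normal(size=n) for _ in range(5)], axis=1)

    corr = np.corrcoef(scores, rowvar=False)  # 5x5 correlation matrix
    vals, vecs = np.linalg.eigh(corr)         # eigh returns ascending eigenvalues
    print("all pairwise correlations positive:", bool((corr > 0).all()))
    print("first-factor loadings:", np.round(vecs[:, -1], 2))  # sign is arbitrary

The positive correlations are the empirical observation; whether the extracted factor corresponds to one real underlying ability is the contested part.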


>Practical intelligence is clearly multidimensional

I wouldn't call that clear at all! Of course no matter how intelligent a person is, there will be environments in which they do poorly. Feynman would do poorly in the environment called, "Everyone find Feynman and beat him up". But that environment is very contrived, or, more formally, has a high Kolmogorov complexity.

Legg and Hutter argue quite strongly for single-dimensional practical intelligence in this paper (I don't agree with their reasoning, but the point is that it's definitely not blatantly "clear" that practical intelligence is multidimensional): https://arxiv.org/pdf/0712.3329.pdf


> Whatever ability IQ tests and math tests measure, I believe that lacking that ability doesn’t have any effect on one’s ability to make a good social impression or even to “seem smart” in conversation.

Ever wonder why incompetent people get promoted? Or why consultants can sell projects using only buzzwords? Ever had that strange conversation where two colleagues are seemingly discussing something absurd, obvious, or impossible, but they think they're being clever?


I recently spent an evening with a guy who, when asked any sort of question, would go into a long-form, several-paragraph-long explanation of the phenomenon in question.

Except that if you actually paid attention to what he was saying, you'd realize he was simply quoting the definitions of things he had learned over time. He had a great vocabulary as well, and would put it all in a polished, academic-sounding paragraph.

The fact is that, unless you seriously paid attention to what he was saying - you'd think he's making a really deep point about something you don't understand. In actuality - he was simply going from one definition to another.

The amount of effort required to refute him is way above what it takes him to blab about anything. He'd go around the issue without ever answering the question.


While I agree to some degree, this also sounds overly dramatic.

First, there's a ton of evidence that people also get promoted/appreciated for the right reasons, not just because they're a fancy Markov chain of buzzwords (example: serial entrepreneurs, like Musk).

Second, there's an underlying reality that eventually crushes people who fail to meet the expectations they have built around themselves.


>First, there's a ton of evidence that people also get promoted/appreciated for the right reasons

You're gonna have to post some of that evidence please.


Sure, the correlations between X1:{IQ, conscientiousness} and Y:{income, educational attainment} are stronger than between, say X2:{agreeableness, height, race} and Y.

Y are examples of what people want (wealth). X1 are examples of "valid" reasons to be recognized as useful and therefore attain wealth, X2 are examples of "less valid" or "completely invalid" reasons.


Your opinions are not evidence no matter how much you dress them up in notation. I asked for evidence as I could use it. Self-discipline is a better indicator for income and education than IQ.[0] So I know for a fact that your a priori fact is not valid. It's all good to share opinions, but please do not present them as reality when they are not.

[0] https://journals.sagepub.com/doi/10.1111/j.1467-9280.2005.01...


Oh well there's a misunderstanding. Conscientiousness is the psychometric term for self-discipline (not 100% the same, but extremely close): https://en.m.wikipedia.org/wiki/Conscientiousness

And I didn't bother to point out particular studies validating my claims because much of this has been known for close to 100 years. In the same vein, linking to a proof of the undecidability of the halting problem would be an excessive reference, given the nature of the HN community.

If you want an example study look at the "Health and longevity" section on the above linked Wikipedia page.

Or to directly support my thesis about X1 and X2 separation, first look at Wikipedia IQ page and the "Social correlations" section, it's generally around 0.5: https://en.m.wikipedia.org/wiki/Intelligence_quotient

Or look at your own link.

Second, height is correlated to about 0.29 with various measures of success: https://www.researchgate.net/profile/Daniel_Cable/publicatio...


That's basically the Peter Principle at work, which there is plenty of anecdotal evidence about by now.

Basically, people get promoted for being competent at their current job. But they are being promoted into a job they may not actually be competent at.

Sure, some people also get promoted or appreciated mistakenly or for the "wrong reasons", but that's often being done on purpose and not by accident.


Anecdotal satire from the 60's is a really low quality source.


> First...

Sure, but it's when people get promoted for the wrong reasons that everyone gets frustrated.

> Second, there's underlying reality that eventually comes crushing people that fail to meet the expectations that they had built around themselves.

Not in a noisy environment. Stories abound of complete incompetents who take some inadvisable risk, only to find it paid off handsomely.


Well, that is genuinely impossible to tell apart. I mean, you can always claim that a system or a person is "overfitting", and that given enough time and/or data they/it will be proven wrong.


I actually am competent when it comes to computer programming, but I've learned (through painful experience) that I have to develop a completely parallel skill set, totally unrelated to the ability to actually program a computer, in order to convince anybody that I am. Real, professional computer programming that produces a tangible, usable result for a person tends to be a fair amount of: reading over documentation to gain a deeper understanding of the problem domain; settling on a mix between a quick, dirty, "brute force" solution and an elegant solution; aiming for a target, missing, and adjusting until the target is hit. We actually joke amongst ourselves about this ("I just cut and paste from Stack Overflow all day"), but I have to be VERY careful how I present the actual truth about how I approach computer programming problems to people who've never tried it. I have to pretend that programming computers breaks down to discrete, easily-estimable "tasks" that can be performed linearly or even in parallel, but each of which takes up wall-clock time.


Interesting. BTW, I thought it was ironic that in the first part of the post the author points out the lack of logical connections between sentences in GPT-2-generated texts, while in the second part she changes the topic to performance on IQ and math tests, which have nothing to do with how well people can detect the failure modes of GPT-2 - after all, people with high IQ or math scores can easily be inattentive when reading texts. Maybe this post itself was generated by GPT-2?!?!?!?!? (or GPT...3?!?!?)


> what the median student actually learns seems to mostly be a set of low order correlations. They know what words to use, which words tend to go together, which combinations tend to have positive associations, and so on.

This really resonates with my experience, especially with people I work with.

I hope in the future people are trained to tell these sorts of generated texts from real texts. I think including some test for this would greatly improve our hiring procedure.


When I was young, I thought I was just naturally "gifted" (I was in the "gifted" program in grade school, after all, so there was that). I figured that I was one of the smart people who could figure out anything, and not one of the dumb people who had to work to understand things. I felt this way for a while, right up until I ran into calculus. Man, calculus chewed me up and spit me out. I had this sort of epiphany when I realized that, no matter how smart I was or was not, there were things out there that were difficult for me to understand, and there may well be things out there that are impossible for me to understand. I got to be a bit more humble after that - but I interact with people who, I believe, have never had that experience or, worse, ignored it: if I can't figure it out, and I know I'm smart, it must be pointless, so I'll ignore it. I depressingly suspect these people of mistaking the need to concentrate for stupidity, regardless of the topic.


> Due to our concerns about malicious applications of the technology, we are not releasing the trained model.

Doesn't that go against the mission of OpenAI? I thought they were about making technology publicly accessible to everyone so that it can't be abused by only a few people. This makes them seem more like a business with proprietary data.


Their mission is to build safe AI. It's pretty clear how this model can be abused and I am glad they are not releasing it.


So the next corollary to this would be - if you wanted to sneak information past people, what would you need to do to their concentration in order to achieve that?


Politics, economics, and religion have been playing this game for centuries. Advertising and PR are relatively recent newcomers, but operating in the same space. If you want to know how it's done, look at how the experts do it.

In outline you create the simplest possible narratives with a strong emotional kick - preferably one that induces anxiety in the receiver, and/or blames an outgroup for all the bad things that are happening.

Then you can sell yourself as the solution to the anxiety and fear.

The narrative itself can be nonsense. It needs a certain superficial narrative coherence, but that's all.


Hmm? What's that? Sure, we can visit your mother on the weekend.

[Later, on the weekend] Wait, we're going where?


Sleight-of-hand magicians and pickpockets spend their entire career sneaking information past people, and they do that by either intentionally drawing their concentration in one direction or waiting for it to happen some other way.


>I’ve noticed that I cannot tell, from casual conversation, whether someone is intelligent in the IQ sense.

For me a giveaway is both how quickly and how well they take in what I'm saying; that is, how much processing gets done? E.g. one of my really intelligent friends would have already connected what I'm saying with what they know about me, and would have already guessed at what I'll say next. This isn't just the domain of intelligent people, but for me, how quickly it happens is a telltale sign. Intelligence can be blinding as much as it is enlightening, though, and I prize kindness and compassion far more than intelligence, which our culture puts on a pedestal.


I'd consider it more a sign if they were able to do that and not know you.


> The point is, if you skim text, you miss obvious absurdities. The point is OpenAI HAS achieved the ability to pass the Turing test against humans on autopilot.

The Turing test requires an ongoing conversation between an interrogator and a subject. I think even an interrogator "on autopilot" (whatever that means) would pretty quickly notice if a subject's responses contained "obvious absurdities".


I think we concentrate more in a one-to-one conversation - in fact, I would suggest that reading counts as "listening to a public pronouncement" - something that is in the environment and not specific to us.

I suspect that Twitter counts the same. It's something we don't apply much attention to because it does not look like a human being talking to us.

In fact, I suspect that a WhatsApp message that records one human speaking and then plays it back will produce a different attention spike than the text-based version.

Edit: of course I wrote the above on autopilot.


This raises an interesting issue: if an interrogator "on autopilot" was fooled, is that really a valid Turing test that was passed?

Fortunately, the solution is simple: to have a Turing test of the interrogator.


You'd have to test interrogators' interrogators ad infinitum then.


thatsthejoke.jpg


Just a random associated thought: I wonder if the game of Mafia [1] is a somewhat better way to discern intelligences than the Turing test.

E.g. imagine a game where the mafia (AIs) can eliminate actual humans from the game by convincing their fellow humans that the eliminated people are actually the mafia (i.e. AIs).

[1] https://en.wikipedia.org/wiki/Mafia_%28party_game%29


So-so. But it makes emotion a lot more involved. And while faking emotions might be hard for humans, it is trivial for an AI, and it's also easy to appeal to our empathy.


The point of the game would be to eliminate AIs, not mafia. No in-game "nights", just a free for all discussion with periodic elimination rounds.


There's ##werewolf@freenode, which would give you the opportunity to see how hard it is to convince others even as a human; there's even a game mode where the game bot becomes the wolf, and people detect that by just some heuristics and gut feeling.


I got more intense results from intentionally shallow models, such as training only on the work of a single author but then letting the shallow training run for long periods. What I got was sometimes useless but sometimes amazing stuff. It came up with a strange kind of plotline and even characters that were kind of like the source author's, but it was still a unique piece of work. It had its own voice, which I think you lose when the model uses too many sources.
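Not the exact setup I used, but a minimal sketch of the shallow single-author idea - a character-level n-gram model in plain Python, where "corpus.txt" is just a placeholder for one author's collected text:

    import random
    from collections import defaultdict

    def train(text, order=6):
        # map each `order`-character context to the characters that follow it
        model = defaultdict(list)
        for i in range(len(text) - order):
            model[text[i:i + order]].append(text[i + order])
        return model

    def generate(model, seed, order=6, length=500):
        out = seed
        for _ in range(length):
            followers = model.get(out[-order:])
            if not followers:
                break  # unseen context: stop rather than invent
            out += random.choice(followers)
        return out

    text = open("corpus.txt", encoding="utf-8").read()
    model = train(text)
    print(generate(model, seed=text[:6]))

Raising `order` makes it parrot longer verbatim stretches of the author; lowering it makes it babble.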


Care to share a link to the code of the training setup?



An excerpt from Anathem (Neal Stephenson, 2008):

> "Early in the Reticulum — thousands of years ago — it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information," [he] said. "... [Crap] — a technical term. So crap filtering became important. Businesses were built around it. Some of those businesses came up with a clever plan to make more money: they poisoned the well. They began to put crap on the Reticulum deliberately, forcing people to use their products to filter that crap back out. ... But it didn't really take off until the military got interested. ... Artificial Inanity systems of enormous sophistication and power were built ..."


A while ago, before I realised the obvious truth that arguing with people on Facebook isn't exactly productive, it dawned on me that many people mistake the ability to form a grammatically correct sentence for thinking. It's nice to see a more formal argument for that theory.


An awful lot of people think the grammatically correct part is optional, too.


I agree. I tend to skim through articles and even books, and that's no good. Had I put in effortful attention and focused on thinking through only two books last year, instead of the ten I "read", I would have come out ahead.


That's very true; it's also true that most books don't demand a high level of attention. Most media in general is designed to require zero effort on the part of the consumer.

I just worked through 'Gravity's Rainbow' and was very mindful and careful in my reading. It was a great experience, but at the same time it was fairly boring in comparison to something like social media.


"Reader, Come Home" by Maryanne Wolf is a good book on the topic of how new technology influences our reading habits.

https://www.goodreads.com/book/show/35887237-reader-come-hom...


congrats! that’s one of those books that really needs a careful reading. it’s on my re-read carefully list but it’s so long and convoluted (in a good but challenging way) that i’ve yet to even think about starting.


Slightly off topic but I found contemplating this possibility amusing:

Would it be possible that, once we manage to eliminate obvious logical and contextual mistakes in the generated texts, they could be used to discover alternative (and consistent) views of the world (e.g. about art, philosophy...)?

The AI would be able to create a huge number of theories and it's possible that some of them would be both interesting and original.

It would be a kind of restrained infinite-monkeys way of exploring theories about the world (restrained because we would prune the mistakes we do not want the AI to make, so it wouldn't just be random typing).

It would be even funnier if we could filter for the subset of generated texts that is testable :)


Sure, but there is an easier way: just read what other cultures have come up with!

They usually went into great detail on the matter, and it has the advantage of being actually based on someone's real experience of the world, rather than just randomly aligning with it.


Is this related to feeling stupid in conversations? I think most people just auto-generate a lot of nonsense, which throws me off. Maybe other people can somehow guess what the point was behind the nonsense.


Some of these generated examples still make sense to me:

>> Aragorn drew his sword, and the Battle of Fangorn was won. As they marched out through the thicket the morning mist cleared, and the day turned to dusk.

> Yeah, day doesn’t turn to dusk in the morning.

I interpreted this to mean a literary jump in time, from morning to day to dusk, in one sentence.

So, the text itself doesn't bother me. What scares me is the ability of an AI to overwhelm us with such a volume of auto-generated content that the signal gets drowned in exponentially amplified noise.


I wonder - could this be used to ensure attention?

Add some randomly generated sentences to a text that students need to learn and then ask them to identify these sentences in the text.
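A rough sketch of what that could look like; the decoy sentences here are made up, standing in for generator output, and the function name is just illustrative:

    import random

    def splice_decoys(sentences, decoys, seed=None):
        # insert each decoy at a random position; return the text plus an answer key
        rng = random.Random(seed)
        spliced = list(sentences)
        for decoy in decoys:
            spliced.insert(rng.randrange(len(spliced) + 1), decoy)
        return " ".join(spliced), decoys

    lesson = [
        "Mitochondria produce most of the cell's ATP.",
        "Ribosomes assemble proteins from amino acids.",
    ]
    fakes = ["The nucleus is powered directly by sunlight."]  # pretend GPT-2 output

    quiz_text, answer_key = splice_decoys(lesson, fakes, seed=42)
    print(quiz_text)
    print("Sentences to flag:", answer_key)

Students who can't flag the decoys were probably skimming, which is exactly the failure mode the article describes.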


GPT-2 and human writers with a poor grasp of logic both produce content that might seem sensical on the surface but is ultimately nonsensical at a deeper level.

Hopefully we can find a way to counteract this in a systematic way. Perhaps the trick would be to punish 'low order correlation' text in the first place.


> But if you ask an exam question where the deep structure answer differs from answer you’d guess looking at low order correlations, most students usually give the wrong answer.

Every kid knows about these. They're called trick questions, and they've been fooling students at all levels for centuries.


> I’ve noticed that I cannot tell, from casual conversation, whether someone is intelligent in the IQ sense.

This is because casual conversation is predicated on EQ and not IQ. In order to be able to ascertain IQ you need to actually test for it, opportunities to demonstrate it aren't going to come up randomly.

This should stare smart people in the face, but it seems we have a blind spot for the general uselessness of our own intelligence in normal situations.

The author goes on to discuss interviews, and I'd argue that EQ is generally more important in thriving and producing on a team than IQ is as well, with a few important exceptions.


Firstly -- Is there even actually a standardized/normed EQ test?

Secondly -- Is there research to validate it as a predictive measure of success?


I'm not aware of any, no. EQ is still a somewhat fuzzy concept. It needs to be more well-defined before it can become a fruitful research topic.


What about the content?

I vaguely try to check for whether someone has heard about things that they say they're interested in. If someone says they love rockets and space stuff, I'll see if they've heard of the rocket equation. If they like computers I'll see if they claim to be able to code.

There's also the other side, people can volunteer that they think vaccination causes autism or they'll ask my star sign.

Oh I get what you mean about EQ now...


The examples she presents of false text can all basically be distinguished from human-generated text by looking for logical consistency. I wonder if this will become a focus of attention within schools. With all these fake news articles, spam emails, and fake viral images, I think the new generation should be trained to recognize these things early on. Even if we can't make discriminating classifiers through technology as of yet, we can hopefully train ourselves to discriminate between fake and real media in the meantime.


> Even if we can’t make discriminating classifiers through technology as of yet

Doesn't the existence of GANs restrict the space where discriminators can win to NP problems?


Not quite correct: Publius Ovidius Naso (“the one with the nose”). Looks like AI > human blogger already.


English is not my native language. Can someone please explain the title of the article to me? I read the article and I get the point, but is that syntactically correct English?!


(Humans (who are not concentrating)) are not (general intelligences)


English is my native language and I still don't understand the title even after you explained it, but the article was interesting anyway. Maybe "Humans who are not concentrating may as well be dumb humans?"


Humans who are concentrating are acting with general intelligence. Humans who are not concentrating are acting with lesser kinds of intelligence. The title is pointing out that human intelligence is not always the 'general intelligence' that is supposedly unique to humans. It is grammatically correct, but you have to understand that "general intelligence" is a term of art.


That's the real danger of AI, not the "rise of the machines". It gives a theoretical foundation for saying "humans who are not X are not Y".


> I’ve taught public school teachers, who were incredibly bad at formal mathematical reasoning (I know, because I graded their tests), to the point that I had not realized humans could be that bad at math — but it had no effect on how they came across in friendly conversation after hours. They didn’t seem “dopey” or “slow”, they were witty and engaging and warm.

> Whatever ability IQ tests and math tests measure, I believe that lacking that ability doesn’t have any effect on one’s ability to make a good social impression or even to “seem smart” in conversation.

> If “human intelligence” is about reasoning ability, the capacity to detect whether arguments make sense, then you simply do not need human intelligence to create a linguistic style or aesthetic that can fool our pattern-recognition apparatus if we don’t concentrate on parsing content.

In the context of the article, these are troubling assertions for the author to be making. They seem to be implying that people who struggle with mathematics are fundamentally less intelligent than those who don't, in a way that cannot be picked up by chatting to them.

If I understand correctly, the author furthermore seems to be saying that a GPT-2 style text generator will sooner be able to match the conversation of such a person than of someone more well-versed in formal mathematical reasoning.

This seems factually wrong to me; I think the author vastly underestimates the complexity of the subconscious processing that people do in order to arrive at the viewpoints they hold, and to transform ideas into coherent speech.

As a related point / analogy, the process by which humans do conscious mathematics (such as arithmetic) is inherently slow and inefficient, whilst at the same time the brain manages to perform incredibly advanced "calculations" in the course of being a highly advanced motion-control system.

I posit that the human process for synthesizing ideas is happening primarily in this more complex underlying format, which we are still some way off from being able to simulate (though I do believe we will be able to, eventually).

The author's conclusion seems a bit like seeing that computers are better at arithmetic than humans are and thus concluding that they will soon surpass us in intelligence.

Furthermore, the author's reasoning seems demeaning to people who struggle with mathematics and explicit logical reasoning, and is a few steps from a claim that such a person is inherently less "conscious".

To claim that a strong grasp of formal reasoning is necessary for those in a position of policy and decision making is one thing. But to assert (without substantial evidence to back it up) that someone with low mathematical-logical reasoning ability has speech which is significantly easier to emulate, because it fundamentally contains less content, seems to be simply a form of intellectual/academic self-aggrandizement.


Thanks for outlining what I largely objected to within this article. I read this as a reductive misapplication of the author’s experience in mathematics and domains of machine learning to broader discussions of intelligence in a way that struck me as the author presupposing a highly culturally dependent interpretation of intelligence. I can understand the motivation for trying to use mathematical and symbolic reasoning abilities as proxies for abstract reasoning, but you’re painting with far too broad a brush stroke if you’re applying that across society given different levels of emphasis on schooling and mathematical literacy. I suppose that part of my core objection is that I believe, based on this article, that the author has a selfish view of human intelligence that focuses on their own competencies and judges those as lesser who don’t have similar expertise.

I’ve got a degree in physics from a top 3 university and I have met individuals more intelligent than me who suffered through various math classes, which I believe was largely due to a lack of experience with the machinery of math or formal reasoning.


> They seem to be implying that people who struggle with mathematics are fundamentally less intelligent that those who don't, in a way that cannot be picked up by chatting to them.

They are saying exactly that.

I wonder what the author's response would be when speaking with an individual vastly more intelligent than herself who, interrupting the author mid-sentence, says, "sorry, this is such a simple concept, I don't converse with imbeciles", and walks off.


It is stronger: they don't limit the claim to people with dyscalculia being stupid; rather, simply being ignorant is enough to qualify as having this lesser form of intelligence.


Being good at math (and here I am talking about more than just arithmetic) is usually a very good proxy for being good at manipulating abstractions. And, imho, that's at least one of the cornerstones of intelligence.


The issue that the GP is pointing at is that the author is implicitly stating that the ability to reason in the abstract is the marker of intelligence.

It may be one form of intelligence, but certainly a brilliant writer, a gifted musician, or an exceptional artist can all be considered intelligent even if their ability to grok logical constructs is limited compared to those that spend their waking hours doing just that, and almost certainly have been honing this skill for their entire lives.


I think the second essential part of the GP's marker for intelligence is the ability to form sentences that convey information, and do it efficiently.

Ability at abstract reasoning is invisible to outsiders unless the bot can also transmit its information to others, as well as understand transmissions from others and react appropriately (constructively or entertainingly).

AFAIK, up to now, none of the measures of synthetic intelligence have tried to measure the flow of information from and into a bot -- its efficiency, coherence, or relevance. I think the rise of master aper bots like GPT-2 and Q&A bots like Watson, which beautifully model syntax and rhythm yet capture no semantics, may finally force this issue to the surface. To wit, information matters more than style.

Frankly, I welcome the arrival of bot overlords like these. Maybe they'll motivate us humans to pay more attention to the meat of what we hear, read, and say, and therein act less robotic ourselves.


Being good at math is also related to having been taught maths properly, and for most normal people getting enough encouragement to put in the work necessary to build one's skill is also important.

I know other traditions for labelling people stupid that centre around them lacking skill at driving or carpentry, and this "maths ability" tradition seems to be largely the same thing.


Don't we know this already from Kahneman (Thinking, fast and slow)?

"Humans who are thinking fast are not general intelligences."


How did the question answering occur? The computer correctly found that the race lasted seven days??


tl;dr: APPLY YOURSELF


L. Ron Hubbard spent some time in Arizona in the early 1950s, when the legendary hypnotist Milton Erickson was lecturing.

Erickson described a "confusion technique" that is in evidence in lectures that Hubbard gave later in Philadelphia. You'd catch him saying things that somebody might say in a lecture but that people don't. For instance, he would continuously say something wrong and 'correct' himself. (e.g. "The Japanese alphabet has 48 letters, or was it 46 letters?"; quotes around 'correct' because it was all bullshit anyway)

Have people listen for hours to lectures like that - a malfunctioning tape recorder - under high social pressure and structured communication, and it will turn their brains to mush. No wonder Scientology practice is twice as harmful per hour as what other cults do.



