The "standard model of intelligence" is that a lot of tasks seem to have correlated (not identical) performance, and we can call the primary dimension of this relationship a "G factor", and in turn measure this implicitly with tests designed to have the same measures of correlation.
This man has a delightful British accent, which seems to count for something, but his "what is intelligence" snippet is a gigantic strawman. "You can't compare a composer and a mathematician on the same axis, they're doing different things!" No shit, jackass. What you can do, for instance, is study the relationship between measured mean population IQ and GDP, which tells you it's really important to, e.g., fight IQ-lowering dietary deficiencies and heavy element poisoning if you care about helping countries develop. It's incredibly powerful as an aggregate measure.
>"You can't compare a composer and a mathematician on the same axis, they're doing different things!". No shit, jackass.
And this outburst serves what purpose exactly? What he said could be basic and obvious, but science involves laying out the basic and obvious things first all the time. Besides, you'd be surprised how many people believe in IQ as if you CAN measure a composer and a mathematician on the same axis [1].
>What you can do, for instance, is study the relationship between measured mean population IQ and GDP, which tells you it's really important to, e.g., fight IQ-lowering dietary deficiencies and heavy element poisoning if you care about helping countries develop. It's incredibly powerful as an aggregate measure.
So, essentially, combine two dubious and much-contested metrics, IQ and GDP, to make decisions. I fail to see how this can be enlightening.
Nor how anyone would need this pseudo-quantification to understand that "fighting dietary deficiencies and heavy element poisoning" is important "if you care about helping countries develop". Does anyone without the unique tools of IQ and GDP think that lead poisoning is good for a country?
[1] Not to mention how many have the related misguided notion that "music is math", just because music theory involves some (very basic) mathematical relationships (relationships you could summarize for a first-year math student in a day without getting a composer out of him).
Who are you, to preside as an authority to chastise and admonish?
> Does anyone without the unique tools of IQ and GDP think that lead poisoning is good for a country?
Pretty obvious that nutrition and food are powerful forces for intellect, and the aggregation of powerful intellect at scale is a powerful economy. Regardless of jargon, a well-fed, unpoisoned populace thinks clearly.
>Who are you, to preside as an authority to chastise and admonish?
I'm a commenter on Hacker News. Who said that only some higher "presiding authority" has the right to "chastise and admonish" rude behavior and name-calling on HN and the internet in general?
>Pretty obvious that nutrition and food are powerful forces for intellect, and the aggregation of powerful intellect at scale is a powerful economy. Regardless of jargon, a well-fed, unpoisoned populace thinks clearly.
Pretty obvious to me that there are far more obvious reasons for wanting a well-fed, non-poisoned populace than to get "a powerful intellect" and a "powerful economy". Basic humanism and compassion, for one.
And who are you to chastise him for his chastisement? Feel free to ask who I am to chastise you for chastising him for chastising the other commenter, I do love these sorts of recursive finger-pointing discussions.
Yes, you are right, it is much simpler when you have just a single number to "optimize" across population. And the correlations are strong indeed. But if you want to explain more variance, you need to reach for better tools.
We are all well familiar with people's various simplified models of the world. They tend to irk hackers, because they work well enough not to be automatically rejected by their users, yet hackers know, and sometimes even have proofs, that the models are ultimately wrong. The same thing happens with IQ. We all know it is mostly bullshit, but the truth is that it does work as a rough predictor of performance. It does explain some of the variance, not all of it.
> Isn't it depressing to see how technology is turning us into perfect consumers?
I don't know what somebody on the street would consider a 'perfect consumer', but an economist would use that term to describe someone who made strictly rational (i.e. objectively good) purchasing decisions. Maybe I've been drinking too much kool-aid from the econofountain, but what on Earth is depressing about that?
I think you are using words and concepts with multiple definitions like "good", "perfect" (for whose purpose?) and "rational". You are swapping definitions at will and it's leading you astray.
Perhaps look at this to temper the econofountain -
"Perfect" isn't a loaded word here, it just means maximally rational.
Take 2 bundles of products - A & B. They both cost the same, but bundle A would bring you 2x as much joy/utility/whatever-you-want-to-call-it than bundle B. To choose bundle B would be "less good" for you than choosing bundle A: that's what irrationality is...by definition. Choosing A is the "most good" option: that's what rationality is. Someone who chooses A over B 100% of the time is a "perfect consumer".
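Reduced to code, with made-up utility numbers just to pin down the definition (nothing here is a claim about real consumers):

    # Two bundles, same price, different (hypothetical) utility.
    bundles = {"A": 2.0, "B": 1.0}

    def perfect_consumer(options):
        # "Perfect" = always picks the option with the highest utility.
        return max(options, key=options.get)

    print(perfect_consumer(bundles))  # -> "A", every single time

That argmax is the entire content of "maximally rational" in this sense.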
> It is already the case that for many of us when we make a decision about what book to read or what film to see or what restaurant to eat at, we don’t make a reasoned decision based on personal experience, but on the recommendations of an app! So that little bit of free will that we got to exercise in our daily lives has evaporated. So the lake of free will has become a pond.
I'm trying to figure out an interpretation of this that actually makes sense. He's suggesting people don't even consider their prior experiences when choosing restaurants now? The subjective experience of "free will" requires a high threshold of ignorance?
Maybe I'm overestimating other people but in those specific contexts, I almost never take the number of stars or a few people's reviews as the only relevant things to consider. (Or, maybe he's flattering himself by perceiving a type of non-agency in "the masses" that doesn't exist.)
I think the point is that when the AI takes over the control of the world it does not have to be through a technological singularity, with Roko's Basilisk and all the drama. It could as well be through generations of people progressively yielding their free will to what he calls "apps". In this scenario, our choices, over the course of a century or so, become ultimately non-existent.
> In this scenario, our choices, over the course of a century or so, become ultimately non-existent.
You need to explain how choosing to let apps make certain types of decisions removes all choices whatsoever. As it stands, you make it sound like a carpenter who is using a nail gun instead of a hammer has stopped driving nails.
Suppose the carpenter pulls out an autonomous house-building robot and tells it to build him a house. Is he still driving nails?
But to your main point, while we may be offloading only trivial decisions to apps today, the better they become at making these decisions, the more natural it will be to trust them for more significant ones. As the original article mentions, it's not much of a stretch to imagine an app that looks at your demographics and preferences and tells you who to vote for. And from there, why not apps that choose where to live, what career to pursue, or who to marry? Some day, it may even seem foolish not to defer to apps for important decisions. After all, how can one fallible, emotional person ever hope to make a better decision than a datacenter full of machines that can coolly consider all of the parameters and potential outcomes?
At that point, floating through a blissfully optimized life, one might say that yes, the apps are deciding everything for me, but they're doing so only in accordance with my preferences and values. I'm still in charge; I'm still exercising free will. But in the absence of making decisions oneself, where exactly did those preferences and values come from?
> And culture builds on culture. (...) That cumulative process is what allows for the civilization that we now live in, but it raises a very interesting paradox and it’s interesting that it’s very rarely commented upon.
> Here’s the thought experiment: (...) imagine I showed you an Apple II running World of Warcraft. If you knew anything about computers you’d say, “Impossible. Impossible!” (...)
> Now, let’s think about human beings. Human beings are hardware that’s about 100,000 years old, but we run string theory, Lie algebra. We’re running 21st-century software! (...)
> I happen to be one who believes that the cultural becomes so complicated at a certain point that it won’t run on our brains. And in fact, you could argue that the reason why we’ve generated computational devices is consciously or unconsciously, we’ve come to recognize that our endogenous, organic computing power is not up to the task and we have to recruit machines to represent culture, because we cannot. I think there’s good evidence for that.
> I think that that ultimately might be what bounds us; that we’ll reach a point where our memory capacity and inferential power simply cannot accommodate the latest cultural artifact. At that point what happens? Does it become independent of us, or does it just stop? It’s like evolution coming to an end.
---
The whole thing is worth watching really, but that question really hit home for me. Though maybe I'm more optimistic: if we automated away all of the menial -- such as deciding what's for dinner, searching for the next book to read, etc. (unless it's done for pleasure of course) -- and the political -- automate a large portion of how our institutions are run -- all of these things that stifle and fragment our creative energies, so that we could focus all of our brainpower and energy on creative endeavours, how much more could we accomplish?
There's still a cognitive limit, but it seems to me that currently we are mired by so much unnecessary overhead that once we are able to remove it, we'll find that limit is a lot farther away than we expected.
I'd guess that for most of us, well over 50% of our creative energy is instead spent on the unnecessary and menial.
Especially since brains are not Turing machines, and the cost of overhead is not linear (50% overhead might mean more than 50% productivity loss): brains get tired over time, they burn out, they need rest, context switching is hard... that overhead burns you out and makes you less productive even when you're not actively thinking about it (what's for dinner, your personal finances, schedules, meetings, that dentist appointment, am I due for a haircut? did I already book the mechanic? my car is making that weird noise again, oh crap did I miss the new Game of Thrones last night, now I gotta catch up... etc etc etc etc) -- what a relief it would be to automate all of that and only focus on what's fun and what's important!
And security. The biggest problem of course is that this is a field ripe with potential for abuse... there's a fine line between a personalized book recommendation service and a massive, streamlined propaganda machine.
We have the know-how to build either of these (very near) futures: one in which we automate all the menial aspects of our day-to-day lives and are left to explore and create... and one in which our desire for convenience is exploited by governments and corporations.
The technical difficulty of achieving either is the same, the only difference between utopia and dystopia in this case is the foundation on top of which it's built:
One is built on openness: open data, neutral networks, open software and standards, all massively available so that anyone with the ingenuity to do so can pitch in and help build this future.
The other is built on a foundation of walled gardens and closed ecosystems: everyone building on top of platforms and technologies that are ultimately controlled by a handful of individuals whose interests we now have to trust align with our own.
It wouldn't be propaganda if it gave you what you wanted. But the same systems that can predict what you want will give you what some unnamed face wants the masses to know, and silently report on people who it detects aren't complying. It's the same system; what it does depends on who owns it and how transparent their operations are.
> Here’s the thought experiment: (...) imagine I showed you an Apple II running World of Warcraft. If you knew anything about computers you’d say, “Impossible. Impossible!” (...)
He should have a look at some of the stuff that demo scene programmers churn out of a C64 these days.
I find his thoughts about the potential limits of humanity provocative and really clever. These three parts made a great impression on me:
> And in fact, you could argue that the reason why we’ve generated computational devices is consciously or unconsciously, we’ve come to recognize that our endogenous, organic computing power is not up to the task and we have to recruit machines to represent culture, because we cannot. I think there’s good evidence for that.
> The atom bomb, for example, forced a crisis. We had an extraordinary power and we didn’t really have the moral probity or sophistication to deal with it. We still do not. And that’s not making a judgment about whether our actions were right or wrong; it’s just that I think thinking reasonably about how to deploy power on that scale is beyond us.
> Human beings are hardware that’s about 100,000 years old, but we run string theory, Lie algebra. We’re running 21st-century software! How is it possible that old, antiquated hardware can continue to run ever newer and more complex cultural software?
Interestingly enough the "mental software" used to handle the crisis of the atom bomb came from mathematics (von Neumann's game theory, in particular the idea of mutually assured destruction) rather than moral philosophy or something which we label as "humanities". I'd probably argue that what von Neumann was doing with MAD was "humanities" but I don't have a good definition of the word.
That's an interesting approach, discarding generality / repurposability as a requirement.
On the other hand, requiring those would mean that being very very good at, say math and only math, would maybe not actually count as intelligence.
Perhaps there are two related things? Artifacts of fixed / embodied intelligence, like birds' wings and traffic lights. And whatever process produced those artifacts.
Saying that intelligence is solving problems or making problems go away, also implies that there's no such thing as disembodied isolated intelligence. Because being isolated and disembodied means there are no problems to work on.
So you can have a general algorithm (say, fitness-biased random search), that only becomes intelligent when put in the right environment (given a fitness function that matches to a problem).
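A minimal sketch of what I mean, with a made-up target-matching problem standing in for "the right environment" -- the search loop itself is completely generic, and only looks intelligent once the fitness function encodes an actual problem:

    import random

    def fitness_biased_search(fitness, length, steps=10000):
        # Generic loop: keep a candidate, flip one random bit,
        # accept the mutation only if fitness doesn't get worse.
        best = [random.randint(0, 1) for _ in range(length)]
        for _ in range(steps):
            candidate = best[:]
            candidate[random.randrange(length)] ^= 1
            if fitness(candidate) >= fitness(best):
                best = candidate
        return best

    # The "environment": a hypothetical problem (match a target bit string).
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    fitness = lambda bits: sum(b == t for b, t in zip(bits, target))
    print(fitness_biased_search(fitness, len(target)))  # converges on the target

Swap in a different fitness function and the same dumb loop "solves" a different problem, which is the whole point.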
The problem with "intelligence" tests is that they generally measure deductive ability because this is easy. Make up tricky problems and check if the answers are right.
Actual intelligence is usually a balanced combination of taking things apart, deduction, and putting disparate things together in creative ways. That last, creative attribute is much harder to measure, so... why bother.
To quote Albert E. "Creativity is more important than knowledge."
His Hamlet analogy doesn't work for me. Couldn't I change Hamlet's name every other line of his dialogue (thus making < 1% changes to the original) and yet, by doing this, drastically change the qualities of the work?
The analogy doesn't work at all because DNA is more like a program than a tragedy (yeah, some programs are actually tragic, but stay focused, please). Changing only one character in a program can have catastrophic consequences. Biological systems are generally more tolerant because they are highly redundant, but still, 1% can easily explain the differences.
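To make the program side of the analogy concrete, here's a contrived example where a single-character change (far less than 1% of the text) turns a terminating function into one that never returns:

    def countdown(n):
        # Correct version: terminates for any non-negative n.
        while n > 0:
            n -= 1   # change this one '-' to '+' and the loop never ends
        return "done"

    print(countdown(10))

Nothing about a tragedy behaves like that; programs (and genomes) do.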
I just don't understand why he doesn't pick the obviously better analogy although somewhere else in the interview he shows some computer knowledge. And I don't know why he relates intelligence to DNA when we know that crows can compete with chimps on tool use.
When you are changing bases/alphabets, you are selecting them intelligently. Mutation is random.
Here is how I think ~12 million bases make such a large difference (I haven't actually read Hamlet):
- Most of it is junk. Or what we think is junk.
- Some of it is anatomical difference.
- The rest of it is words which change the meaning of a crucial sentence, e.g. '... I hate you, but ...' vs '... but, I hate you ...'. I don't know if the play actually has sentences like this, but that's beside the point. If a sentence is changed, it might affect the meaning of all the text after it -- or, in the original case, things in the body which are built on previous things.
Stupidity can arise out of having the wrong model for solving a problem. Say, the theory of humors or of spirits rather than a germ theory of disease. You can collect all sorts of information _consistent with your model_ and still end up no better than, or even worse off, than you were before. E.g., determining that you've got to bleed a patient to death.
You lack the appropriate model to solve the problem.
Wilful ignorance, as illustrated by the examples of continental drift and the Mars canals, or, today, global warming, requires not only having the wrong model, but rejecting the correct model.
Related, I particularly like his definition of genius.