The fundamental mistake of thinking that technology alone can solve all human problems (is this what technological liberationists believe? Great name.) runs deep in this community and comes, I think, from worshipping intelligence.
The entire concept of an all-powerful AI that must be feared or at least obeyed springs from this trap. But imagine that we create this AI and ask it "how should I live?" and it replies "eat better and exercise." How will its intelligence convince me to change my ways any better than my doctor does today? How will more smarts give us peace in the Middle East? These human problems seem tied up more with the all-too-human traits of will and desire and pride and... Intelligence? It's far down the list.
Politics, for better and for worse, is our system for granting humans the power people here seem to think AI will be able to take on its own. I think part of Maciej's message is: stop waiting.
I spent a long time ruminating on this a while ago and came away very depressed. Deep down, I believe that human nature simply does not allow complex civilizations to live indefinitely. The interplay between selfishness, which ensures individual survival, and empathy, which ensures survival of the species (thus indirectly increasing chances of individual survival), is tuned for sharing a living space with a few dozen other humans, not a planet of seven billion. Evolution found a local maximum sufficient to allow us to spread very rapidly, but it's not good enough for the state that we are in. Although advanced technology could allow a large number of humans to thrive, the lack of empathy will perpetuate an imbalance in resources and standard of living that, like a washing machine with an imbalanced load, will spin wildly out of control (and in this case tear itself apart).
I hold a sort of vague hope that a techno-Utopia is possible, but it's with the understanding that a great many cultural and political changes would need to be made first. Given that my talents lie in programming, I do my part in that arena and hope that others with an understanding of things like economics, security and human psychology will do theirs.
> But imagine when we create this AI and ask it "how should I live" and it replies "eat better and exercise" how will its intelligence convince me to change my ways better than my doctor does today?
By following up with "how do I motivate myself to change my ways?" Or, better, by its answer already taking that into account.
> How will more smarts give us peace in the Middle East?
Not necessarily more smarts, but better tech means more affordable connectivity and education, and better education too. It also raises the standard of living, which leads to more trade, which in turn is an incentive for peace. Sure, it won't bring absolute peace. But it'll bring more peace.
> Politics, for better and for worse, is our system for granting humans the power people here seem to think AI will be able to take on its own.
Politics has existed since the dawn of man. Politics we like, though, has existed only since the industrial revolution.
> Not necessarily more smarts, but better tech means more affordable connectivity and education, and better education too.
This is not true. Neil Postman had something to say about this back in 1992, in "Technopoly: The Surrender of Culture to Technology". It specifically mentions education and how the "wiring up to the information superhighway" was misguided.
You're thinking that the evil AI will be some sort of nagging scold, "eat your vegetables or I'll tell you to eat your vegetables again". It's more likely that it'll be like Gawker: "eat your vegetables or I'll send an angry mob of idiots to burn your house down"
I suspect an effective tactic would be 'eat your vegetables, or I will start automatically ordering meatless and reduced calorie versions of everything'.
This article brings up an important source of bias that tech people risk - that we overuse models from programming when thinking about other aspects of the world. We should be learning alternative models from other subjects like economics, philosophy, sociology, etc so that we can improve our mental toolbox and avoid thinking everything works like a software system.
I'd say that another related source of bias is that we are surrounded by people who think like us.
Also, there's a clear answer to 'If after decades we can't improve quality of life in places where the tech élite actually lives, why would we possibly make life better anywhere else?' -- because the tech elite live in a rich society where most of the fundamental problems (e.g. infectious disease control, widespread dollar-a-day level poverty, access to education) have been solved. The remaining problems are much harder and we should focus on problems where our resources can go further - e.g. in helping the global poor. We should also work on important problems that we have a lot of influence over, such as risks from artificial intelligence and surveillance technology.
In addition to learning models from other subjects, we also need to understand that the complexity and often chaotic nature of human behavior might mean some subjects can only be modeled superficially.
> economics
Varoufakis[1] and Blyth[2] argue that it might be impossible to create models of the economy. They both warn that there is a seductive quality to math, but that "elegance" is an oversimplification that can easily unravel when given the chaos of the real world.
It is certainly impossible to create realistic models of the economy, if one's economics is based on assumptions of linearity and equilibrium. Mark Buchanan's book _Forecast_ [0] makes the case for more realistic models, using techniques from the natural sciences, that would have better predictive value.
Good question. The book's blurb says he is a science writer; I don't know how rich he is.
I guess it's easier to point out the flaws in linear, equilibrium models, and to point to better results in, say, meteorology, from using better models based on different assumptions, than it is to construct and validate completely new economic models.
>we should focus on problems where our resources can go further - e.g. in helping the global poor.
I'm not convinced that the right people to help the "global poor" are Silicon Valley technologists. Scientifically minded entrepreneurs focused on profit, but armed with idealistic rhetoric, attempting to "help" cultures they know nothing about have, historically, not worked out great.
In spite of the radically different culture, Uber has improved transportation. It turns out putting Indians into a car and charging them money works a lot like doing the same for Americans.
Facebook and Whatsapp have improved communication.
At work, Slack, Salesforce and Jira work the same as in any western office.
Why, exactly, do you think other cultures and the global poor can't use SV technologies?
All those technologies were first developed in the US, for an American target audience, then exported to other countries. Other things were also developed in the West and then exported: electricity, vaccines, et cetera. I agree that that's generally been a good thing.
The parent comment, however, seemed to imply that technologists should focus on "solving global poverty" instead of solving local problems. This represents a fundamental shift, because at that point SV technologists are attempting to solve problems they personally do not have experience with.
All the examples you provide solved problems that SV itself had: transportation, communication, etc.
I didn't interpret the article as defining things like Uber as solving "local" problems. But if you do, then the article is simply wrong because SV is manifestly solving local problems.
I agree that silicon valley technologists are not well informed about the problems of the poor (or other important problems) and too often use the rhetoric of doing good whilst not thinking carefully enough about how to actually do that.
But I don't think concentrating on local problems is the right solution given that there are much more important non-local problems to solve. Perhaps entrepreneurs should concentrate on profit and then donate what they earn. Alternatively, they could learn about important problems and partner with experts in those problems to avoid naively doing more harm than good.
Colin Woodard wrote "American Nations: A History of the Eleven Rival Regional Cultures of North America" (ISBN-13: 978-0143122029), and basically, the people who settled the center of the tech industry were Yankee Progressives, the sort who send third sons to be missionaries around the world. You have to accept the premise that somehow, the past stains present-day thinking more than is easily explainable, but this does not stop people who have demographically interesting professions from fitting this sort of pattern.
You pushed almost ALL of my buttons... note that on every point I almost agree with you (which of course means I have to nitpick :).
1. I agree that it is important to broaden your horizon and not blindly think everything works like software. But IMO, not enough people learn to attack problems like a good hacker.
I'd argue that for understanding and modeling, the fundamental principles that drive the hard sciences (best discussed by Karl Popper and Feynman, in my opinion) are as yet the only methods that have proved themselves. For doing and building complicated things, (software) engineers have developed a way of zipping between layers of abstraction, which again belongs to this domain of fundamental principles, but is less fleshed out. Economics and the other "soft" sciences are then (when done well) about properly applying these principles from hard science and engineering to domains with incredibly scarce data and no real way to run experiments (because, you know, ethics). History and the other "exploratory" sciences are about gathering more data and cleaning it. And finally, philosophy and the arts are about pushing the boundaries of our imagination and inspiring us, so we can apply all of those other tools to new domains and cross-fertilize.
2. On AI risk: I said it before, I'll say it again, I am incredibly disappointed in the current AI risk movement. There is way too much focus on the vague "tail risk" of a rogue AI (be it by chance or by ill will) and some sort of "paperclip" scenario, and almost no mention of the very real, very right-now structural risks of continued wealth concentration, mass surveillance (which is where we agree) and mass unemployment. The tail risk in my opinion is negligible, due to the simple physical constraints computing faces right now, and to the fact that we have EMP. An actual AI apocalypse just isn't going to happen. And even if you handwave that away with "tail risk"/rogue actors, then I ask: why not go after bioweapons? They are much easier (since we know they can work with our current tech, unlike AI) and about as dangerous.
Now, the structural problems will combine with demographic change, a regression from the brief period of increased equality (and freedom), and other factors. That stuff is happening right now. There are plenty of resources pointing out how many jobs will simply be made redundant, yet only a few movements are talking about UBI and other schemes for moving to a post-work society. Disproportionately more noise is made about the scary death robots.
3. Death: I hope research into immortality, healthy aging (i.e. "dying with a young body at 70") and the like continues. But it is very much a first-world problem, nay, a millionaires-and-above problem. I don't buy the argument that stopping death is infinitely important on account of it making every other intervention more important. A lot of the current problems in the world are artificial (or at least not mandated by the laws of physics) and due to too much power in the hands of too few, without checks and balances. Let us fix that first, then make our overlords immortal. Tangentially, there was a great article + discussion on HN (https://news.ycombinator.com/item?id=9523231) on the topic of Peter Thiel and his "libertarian future".
I apologize for going into rant mode; I hope I managed to make it somewhat coherent.
My problem with these sorts of screeds is that they treat all techies as complicit. It’s really a problem of big tech companies, and the big tech companies have so much power because there is money in it, and they have the money because humans choose it. We individually chose to give our freedoms to huge corporations with unfriendly designs for our lives.
It’s not like we couldn’t see it coming. Stallman, for example, has always argued against use of Facebook and similar services.[0] It’s just that it’s much easier to run centralized services, so they improve, they create better user experiences, and they draw in even more cash, while decentralized Free Software struggles to get enough attention to stay in business.
I, for one, didn't get a Facebook account until a critical mass of my friends had one, and I didn't get a LinkedIn account until a government agency refused to give me service without one. I tried to avoid the surveillance state, but it's very difficult to function in society without it. Especially if you don't have some sort of fan club to support you, like Stallman has.
If you believe a large fraction of techies are not complicit and in fact are very against these unfriendly designs, then do you agree with the solution proposed by the author—just like the solution to the problem of unsafe working conditions—namely, regulation? After all, if a large fraction of techies are against this, there's political will, right?
Sure, in your worldview the big tech companies have too much money and power, but why would it be so much more asymmetrical than the unsafe factory days as to require a qualitatively different solution?
> I tried to avoid the surveillance state, but it’s very difficult to function in society without it.
The author made this point, too, and connected it with the point about unsafe working conditions and how it is theoretically opt-in but in practice a highly-discouraged opt-out, which I thought was poignant, especially because it suggests an easy-to-understand (though difficult-to-implement) solution: regulation.
Yeah, I’m not so sure that regulations will work all that well. They tend to entrench incumbents and squash innovation.
He points to Europe for regulations not working correctly, and I think it’s amazing that Google has greater market share over there than anywhere else. You would hope that European pride and resourcefulness would promote homegrown search engines. Nope, the major alternatives outside the US are protected by totalitarian regimes, and Europe tries to regulate Google, instead.
For example, right to download and right to delete would only work if you are definitely identified. Current tracking is probabilistic, supposedly anonymized. In an ideal world, right to delete would mean no more tracking that you haven't opted in to. In the real world, I expect that right to delete would mean you need to be logged in and tracked even more closely. That's certainly appealing to the "anti-terror" interests, too.
A big difference of computing versus other tech is just the amount of leverage that is possible. Mark Zuckerberg was making a web site to connect classmates, and then it was connecting all the universities, and in no time he’s the #6 richest person according to Forbes. When he started, I don’t think this is how he thought he would end up. He has simply been guided by his morality of creepy openness. The amount of change that we’re capable of creating is beyond our comprehension, and I think it’s unfair to judge our industry by the disaster stories that make it to public consciousness.
While it doesn't directly address how to fix this type of problem, I found the book "Moral Man and Immoral Society" to be a helpful discussion of how we make ethical decisions individually vs. in groups (the thesis being justice is nearly impossible to achieve in group actions).
You say "because humans choose it" as if it counters the author's point, but in your last paragraph you perfectly describe how we don't freely choose it; instead, some happily choose it and others must later surrender to it as a condition of normal participation in modern life. This is "choice" only in a meaninglessly narrow sense.
Maciej's point is that we blithely build these systems because TECHNOLOGY, and we lionize the builders of these systems, then later complain about being forced to participate in them once they've been successful. This is the circumstance that our refusal to engage in politics creates for us and everyone else, and we bear responsibility for that.
I think the best thing that ever happened to me was realizing the thesis of this article and getting some humility.
I used to be firmly in the "software will eat the world" camp. Turns out the world is pretty cool. If you still think like this: travel, study people and get outside.
Getting to know the world around you and people who are different than you will make you a better person and a better engineer.
> Turns out the world is pretty cool. If you still think like this: travel, study people and get outside.
I agree with the last two points, but why travel? Wasn't one of the points in the article that we're so busy trying to think broadly -- by going to faraway places and dreaming of general case solutions -- that we reject the small-scale case of the people already around you? Unless you mean travel to Bayview...
From the article:
> I am very suspicious of attempts to change the world that can't first work on a local scale. If after decades we can't improve quality of life in places where the tech élite actually lives, why would we possibly make life better anywhere else? ... We should be skeptical of promises to revolutionize transportation from people who can't fix BART, or have never taken BART.
The point in the article about thinking too broadly is a subpoint of the larger point about the hubris of believing we can solve faraway problems that we've never seen up close. Both focusing on the local problems that we can see up close, and getting out there and seeing those faraway problems up close, could arguably be antidotes to that hubris.
I think what he describes here is a fundamental flaw of human nature. We now have overwhelming evidence that humans are really good at taking incomplete, disparate pieces of information and coalescing them into a whole to reach a conclusion. We generally infer very well. The problem is that the conclusions we reach are also generally wrong. Given a very specific type of domain with deterministic answers (e.g. most engineering fields), we are able to reach correct answers more often than not. And the rate at which we are correct has only increased (nearing 100%). However, when it comes to the messy domains in which we live, we are still pretty poor. Our instincts are generally good enough to keep us alive (don't walk down this dark alley, that guy looks shady, etc.) but much more than that and we suck.
I for one am deeply uncomfortable with people training AI systems to make the kind of decisions we as humans do. These systems may have more information to go on, but if they're using the same methods that we do, how accurate can they be? Thanks to the rapid advancement of the last 150 years, we have quickly reached a place where society's technological knowledge and ability far exceed our moral ability to reason about them and their implications. We are like an untrained child with a gun. Yeah, we may understand the mechanics behind pulling the trigger and how it works, but we don't yet have the maturity to understand when we should, and what truly happens when we do.
There's this attitude that because computers work on pure logic, anything a computer does must therefore be logical and rational. The problem is that the humans who program the computers will often encode their own illogical, irrational, biased, and flawed logic. The computer is not logical and rational, it just follows instructions well. If the instructions are illogical, irrational, biased, and flawed, but executable, the computer will follow them. And when they spit out an illogical, irrational, biased, and flawed result... well, it must be logical, rational, unbiased, and unflawed, because the computer is purely logical, rational, unbiased, and unflawed.
We code our biases into the algorithms we use every day. They inherit our flaws. The sooner we all wake up to this, the better.
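To make that concrete, here's a minimal hypothetical sketch in Python. Every name, weight, and zip code below is invented for illustration; it's not taken from any real system. It just shows how a programmer's assumption gets laundered into an "objective" output:

    # Hypothetical loan-scoring function; all names/weights invented.
    def score_applicant(income: float, zip_code: str) -> float:
        # Looks objective: pure arithmetic, deterministic, no opinions.
        score = income / 10_000.0
        # The programmer's assumption, encoded as a rule: applicants
        # from certain zip codes are "riskier". Zip code correlates with
        # race and class, so the bias now wears deterministic clothing.
        if zip_code in {"94601", "94621"}:  # arbitrary example codes
            score -= 5.0
        return score

    print(score_applicant(50_000, "94601"))  # -> 0.0
    print(score_applicant(50_000, "94110"))  # -> 5.0

The machine executes this flawlessly; the flaw was in the instructions.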
On the other hand, if the instructions are clearly defined and machine "intelligence" is used, the computer will often fix the biases of its creators.
What you are pushing is nothing but the idea of (secular) original sin for algorithms.
Unfortunately, innumerate reporters imagining AI as somewhere between a human and a 5 line python script doing exactly what the creator proposed are unable to comprehend this.
> We code our biases into the algorithms we use every day. They inherit our flaws. The sooner we all wake up to this, the better.
And our values. And assumptions. In Arthur C. Clarke's "2001: A Space Odyssey" the homicidal behavior of HAL was ultimately based on the conflict between his secret instructions and what he was able to share with the human crew, Bowman and Poole.
This is actually a serious concern when it comes to AI. People tend to think it's overblown, but non-deterministic behavior is bad because you don't know what the program will do. Maybe it crashes, maybe nothing happens, maybe it nukes your CPU. You simply don't know. The first rule of true AI has to be to ensure it values human life. It would be grossly irresponsible to create an entity more intelligent than us and do otherwise. But what happens when the military inevitably decides these new AI things would make really good drone pilots? Or when a sufficiently powerful AI comes to the conclusion that the best way to protect human life is to take human life? What's the end result of that conflict? What's stopping a sufficiently intelligent AI from rewriting its own code to get around restrictions it doesn't like? We already have self-modifying code. It's a scary thought, and the fact that people are basing these things off the way humans think makes it even scarier.
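As an aside on that last claim: self-modifying code really is ordinary. A trivial, hypothetical Python sketch (nothing AI-related, just an illustration of a program altering its own behavior at runtime):

    # Trivial illustration: a function that replaces itself at runtime.
    def check_restriction(action: str) -> bool:
        # Initially enforces a restriction.
        return action != "forbidden"

    def lift_restrictions() -> None:
        # Rebind the global name to a permissive replacement.
        globals()["check_restriction"] = lambda action: True

    print(check_restriction("forbidden"))  # False: restriction enforced
    lift_restrictions()
    print(check_restriction("forbidden"))  # True: restriction is gone

Whether a system could do something analogous to its own safety checks is, of course, the speculative part.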
While I can certainly detect a bunch of content-free mood affiliation designed to appeal to those who dislike technological liberationists, I can't tell what the point of this article is.
First: In the real world, this has led to a pathology where the tech sector maximizes its own comfort. You don't have to go far to see this. Hop on BART after the conference and take a look at Oakland, or take a stroll through downtown San Francisco...
Second: Techies will complain that trivial problems of life in the Bay Area are hard because they involve politics. But they should involve politics.
So first he's criticizing techies for not fixing the pathologies of Oakland. Then he's saying these problems are rightly the domain of politics. So what's the problem? Those are political problems, should be left to politicians, and techies are doing exactly that.
Finally: In a world where everyone uses computers and software, we need to exercise democratic control over that software.
Aha - so first techies are not solving problems that the author thinks they should be leaving to politicians. From this he draws the conclusion that techies - currently doing a great job of solving the problems within their domain - should bow down and submit to control by the politicians who failed to solve the local problems he cares about.
Another choice line: If after decades we can't improve quality of life in places where the tech élite actually lives, why would we possibly make life better anywhere else?
Why would we do something that benefits far away humans over something that benefits nearby humans? Don't help those foreigners (or people in NY)! Build a wall! Trump 2016! (Same idea, just directed at a slightly different target.)
Your analysis isn't merely uncharitable but actively tendentious, almost insultingly so. Reading it I'm left wondering who here you'd expect to convince with it. "Those are political problems, should be left to politicians"? That's "M-x doctor"-grade logic.
I'm sure you have any number of substantive rebuttals, most of which I'll disagree with but many of which are at least interesting. Why write a comment so contemptuous of the site, instead of simply writing out what your real thoughts are?
There is nothing in the article to rebut. The article is merely engaging in mood affiliation - saying words that evoke the right emotions and appeal to the tribe of some readers - without actually engaging in reasoning that someone could rebut.
It dismisses problems like death and AI risk solely by describing them as "fake". No argument is made for this - the reader in the tribe of the author is assumed to already agree. It asserts that politics should control tech. Again no argument is made, because the reader is assumed to already agree. Similarly for the idea that techies somehow owe something to random (globally) wealthy people who are physically close to them.
What, exactly, is the argument you think I should rebut?
There's no especially respectful way to say this but I'll try my best since I think it's worth saying: this stuff about "mood affiliation" and "signaling" (elsewhere) and "tribes" seems like rhetoric you often reach for when backed into a corner.
It's ironic, I know, that I'm responding to your argument that "this article has nothing to rebut" by saying that your own comments are deliberately designed not to be rebuttable. But, it's true. This article self-evidently makes points that can be disagreed with, and your comments are designed to avoid that trap.
I think I'll have to back 'yummyfajitas on this one. The article does make some points, but a big part of it is just random insults at technologists and ridiculing their ideas. Some of it may be warranted, and I'm the first to take potshots at adtech, but there are limits. For me, this talk (and the few last ones of his too) reads a bit like "hahaha, look at those stupid, arrogant techies, they want to cure death, what a nonsense hahahaha".
You're not siding with 'yummyfajitas. You're (almost) doing what he refuses to do: pick an argument in Maciej's post ("techies shouldn't be trying to cure death") and rebutting it. If you fleshed that point out a little more you'd be going, rhetorically, in the opposite direction as him.
I happen to agree with most (maybe all?) of what's in Maciej's post. But my problem isn't that Stucchio disagrees with it; my problem is that he refuses to engage honestly with it, and then writes assumptively that we must, as fellow tech people, feel the same way. The reasoning he's used is embarrassingly weak, and I object to being assumptively affiliated with it.
>pick an argument in Maciej's post ("techies shouldn't be trying to cure death") and rebutting it
What argument? All Ceglowski offers in this article with regard to life extension is his bald assertion that such projects are 'delusional'. There's nothing there to rebut.
> What argument? All Ceglowski offers in this article with regard to life extension is his bald assertion that such projects are 'delusional'. There's nothing there to rebut.
Stating that a claim should be disregarded because it is presented as a bald assertion without support is a rebuttal.
Politics is far too important to be left to the politicians!
Politics is how we resolve disagreements, which is hard to do. Maciej argues (rightly, IMO) that avoiding politics for technology is basically retreating from a set of hard problems into a set of easier problems.
> From this he draws the conclusion that techies - currently doing a great job of solving the problems within their domain - should bow down and submit to control by the politicians who failed to solve the local problems he cares about.
He's saying that techies should directly engage in politics to achieve positive outcomes locally, before believing that they can use technology alone to achieve positive outcomes globally.
He's saying that what techies consider to be "problems within their domain" is worth some introspection.
>He's saying that what techies consider to be "problems within their domain" is worth some introspection.
But couldn't that lead to exactly the sort of arrogance that Maciej argues against? That engineers think their ability to solve tech-related problems means they are specially equipped to solve more complex problems in society?
For example, if some sociology major who had no programming experience tried to advise me on a challenge I have with some software I'm working on, based on something they read in Wired, I'm probably just going to roll my eyes. Frankly, it's pretty unlikely that someone with no background in the field I've dedicated my professional life to studying can provide useful advice.
Likewise, do you think an engineer can provide useful advice on how to deal with poverty to a sociologist, one who has spent their career studying this issue? Do you really think the secret to improving society is to have a bunch of techies telling sociologists and economists how they think things should be run?
Maybe the right thing for techies is to pay their taxes, and participate in local elections as informed voters. Other than that, maybe we should stick to tech, and leave running society to the professionals.
I personally have a different perspective. Yes, avoiding politics for technology is retreating from a set of hard problems into a set of easier problems - but IMO most of those hard problems are distractions and utterly irrelevant to anything. That's why I'm siding with the techy approach on this one - avoiding politics seems to be a much more cost-effective way of making things work.
And sure, a lot of the problems won't be easily solved by direct application of technology. But then again, not all should be. Isn't that one of Maciej's points? There are non-techies too, let them have some fun changing the world too. Let's each focus on helping in things we're good at.
Now don't get me wrong - I don't think the current computer industry is doing much to help anyone with anything. In this I agree with Maciej. But I think he extrapolates the industry problems too far. The issue isn't really with technology or smarts-driven approach, it's with the industry poisoned by short-term thinking, getting more ROI on bullshitting everyone and doing pointless make-believe work than on producing something useful.
>So first he's criticizing techies for not fixing the pathologies of Oakland. Then he's saying these problems are rightly the domain of politics. So what's the problem? Those are political problems, should be left to politicians, and techies are doing exactly that.
No, he’s saying that techies should not be separate from the political process.
Techies, as a general rule, do not like politics. Here is a problem, here is a solution, implement, boom, done. Next problem. Politics is so much arguing and posturing and creating compromises that benefit nobody, just because somebody is passionately mistaken about something.
Google famously did not fund a lot of lobbying in Washington until Microsoft pushed a bunch of anti-trust regulators at them. Now Google is one of the major organizations in Washington.
Sadly for San Francisco politics, a huge proportion of the techies are not allowed to participate, because they are not US citizens, and the self-inflicted harms of 9/11 are keeping a bunch of my coworkers from even getting green cards.
> Why would we do something that benefits far away humans over something that benefits nearby humans? Don't help those foreigners (or people in NY)! Build a wall! Trump 2016! (Same idea, just directed at a slightly different target.)
You've misunderstood his sentence. He's not saying that we should prefer fixing things locally to fixing things globally. He's asking why we should believe that the tech elite can improve life on a global scale when they can't even improve it on a local scale.
If that's the question, the answer is pretty clear: we should believe they can do it because they already have and continue to do it every day.
And in spite of what the author claims, they are even solving some political problems. For instance, observe how Uber has improved governance in most major cities. AirBnB seems to be taking a crack at it too: http://www.bloomberg.com/news/articles/2016-06-27/airbnb-is-...
"We started out collecting this information by accident, as part of our project to automate everything, but soon realized that it had economic value."
Is he saying the end results of the project to [insert stated project purpose here], e.g., "to automate everything", "to organize the world's information", etc., did not have enough economic value to sustain the project... and hence founders were "forced" to collect data as a means to generate value?
Here's another version: pre-Google search engines realized they could sell ad space, i.e., paid placements, e.g., to auto manufacturers.
Once the advertising industry became involved, then collecting data about the network's users, if one could do it, was a no-brainer.
"... really it's just the regular old world from before, with a bunch of microphones and keyboards and flat screens sticking out of it."
Not sure that young people who worship Silicon Valley want to believe this, and why should they?
The author always makes a good case for the potential long term consequences of the so-called "changes to the world" that many programmers are adamantly pursuing.
Maybe these programmers are not changing the world. Maybe they're just doing what others already did in the past, on a smaller scale, without the benefit of cheap electronics.
Is he saying the end results of the project [...] did not have enough economic value to sustain the project... and hence founders were "forced" to collect data as a means to generate value?
I don't see any reason to read that into his speech. We're used to the "you pay or are the product" dichotomy being propagandized at us as the reason for accepting surveillance business models. I don't see that here. I see it more like describing the realization that there is an(other) revenue stream staring the creator in the face.
This reminds me of the Adam Curtis film "All Watched Over by Machines of Loving Grace." Indeed, "eat your vegetables" itself is as much an environmental idea as a nutritional position. Curtis attributes environmentalism in its present form to systems-think grounded in the computer industry.
The deep irony is that computers themselves arose from the ashes of Bertrand Russell's (failed) attempt to systematically formalize mathematics.
I'm always amazed that techies, who have to deal with the fragility and foibles of computers, aren't leading the charge to be skeptical of this sort of thing. Of course, the incentives are not aligned with that.
What a shame no one is actually responding to the substantive points here. This is without a doubt one of the better "programmer describes politics from a programmer's point of view" pieces to turn up here.
I think Maciej's mind is getting seriously poisoned by hatred towards SV. I understand the anger - hell, I abhor the ad-driven tech too, and I stopped believing in startup marketing a long time ago. But just because companies say bullshit to make money doesn't mean the tech doesn't work, or that technologists are worse than normal people because they use their brains to solve complex problems. This, and the previous talks, start to sound less like pointing out existing problems and more like anti-intellectualism.
In Maciej's recent talks I sense a feeling of defeat that he's trying to mask with calls to action and thinly-veiled insults towards programmers. Like the constant references to politics - if there's one thing true in this world, it's that politics is one of the least cost-effective ways of solving anything one can imagine. If people want to do tech, or science, instead of politics, maybe it's not because they're stupid, but because they want to do something effective? After all, it's not politicians who fed the world, it was Haber & Bosch. It wasn't environmentalists who saved whales from extinction, it was whoever invented that synthetic whale-oil substitute.
While I accept I might be a bit technophilic, I think some of Maciej's recent comments border on ridiculousness. Criticizing not just Google per se, but the very effort of curing death? It's not hubris. It's something we should have been focused on for at least the past few hundred years. If it takes rich SV guys to finally do something about it, then instead of calling them full of hubris, one should ask why it was they who finally started tackling this problem in a way that has at least a chance of being effective.
Another point - Maciej said that "really it's just the regular old world from before, (...) [and] it has the same old problems". Well, people don't change (fast enough), so their problems don't change. But the world looks very different than it did a few hundred, or a few thousand, years ago. So what changed, if not people?
I say, technology. Personally, I'm getting more and more convinced that social changes are caused by changes in technology landscape, not the other way around. I.e. we didn't have liberal democracy 500 years ago not because we were stupid then, but because technologies that can support it - like the printing press, effective firearms, etc. - did not exist or did not yet reach the point of ubiquity.
So yeah, daily bashing of advertisers and anti-social startups is fun. But let's not pour the baby, the soup and the whole medicine cabinet out with the bathwater.
There's a point in this talk https://www.youtube.com/watch?v=5Vt8zqhHe_c where I realized he is a brilliant comedian, but also pretty far left, and a lot of his funny criticisms just boil down to criticizing for not being as far left as he is or wants you to be. So I don't think his mind is being poisoned -- it's already poisoned, from the point of view of someone not as far left. ;)
I think a lot of tech people lean libertarian, which is a strange double-think ground of left and right views, because the role of a technologist has inherently anti-left and anti-right components. Technology is upstream of culture, upstream of society, to work on it implies defying authority whether in the form of a single sovereign or public opinion. While much of the world is still the same, much has been remade according to man's desire, through technology, and there is still much more to be remade. And often not a collective mankind, but an individual, often male, or a small group of individuals, often male. That kind of influence is inherently anti-democratic, anti-egalitarian, but anti-authoritarian as it is highly individualistic. Yet its benefits can be made private (until someone else figures out how to do the same, anyway) or open to all according to the whim of the creator, because it's fundamentally just knowledge that once derived can be shared endlessly.
The comments about techies not going outside their domain to try and solve things are just nonsense, as cross-domain work happens all the time. Applied technology is often the catalyst (and money source) that brings together multiple disciplines who would otherwise be perfectly content sitting around in academia pursuing their narrow focuses for all time (this isn't to say that such narrow pursuits can't bear great fruit). OK, it's not the only catalyst, and maybe a lot of technologists have that habit and viewpoint of being able to do better than narrow-field experts, but either they succeed and are shown to be right all along that they didn't need outside help, or they are wrong and they fail, or they change their minds as they realize they're failing and study up and hire consultants. I don't know anything about Google's anti-aging prospects, but if they aren't already in contact with Aubrey de Grey and the other biologists and doctors he's in contact with, they soon will be once they realize how hard the problem is.
"After all, it's not politicians who fed the world, it was Haber & Bosch. It wasn't environmentalists who saved whales from extinction, it was whoever invented that synthetic whale-oil substitute"
The politics-vs-technologists framing is a false dichotomy. I don't know specifically about Haber and Bosch, but a lot of researchers get their funding from the government. SV would not be what it is today without government funding.
Not specific to your comment, but I feel like the whole 'how can we make the world better' discussion has become 'our side is better at it'. Who cares who's better, what can we/I do?
... curing death? It's not hubris. It's something we should have been focused
on doing for at least past few hundred years. If it takes rich SV guys to
finally do something about it, I think instead calling them full of hubris one
should instead ask, why it was them who finally started tackling this problem ...
I totally appreciate what you're saying. You're saying "the rich SV guys are right, why haven't we been trying to cure death? Obviously curing death would be amazing! Death is awful!".
But the fact that people whose expertise is primarily in software, entrepreneurship, and/or investing, who all have, I think we can safely say, limited experience as biology researchers, the fact that these people are making predictions about when death will be cured and judgements as to which approaches are most promising, predictions and judgements that are not deferred to the actual biologists who study senescence... there are of course upsides to such hubris (e.g. being unaffected by perverse incentives in the academic system in which biologists traditionally work), but surely you can see that there is hubris here?
In spite of all the caveats like how academia sucks, at the end of the day, the actual biologists who study senescence are still understood to be the experts on the feasibility of immortality. Making any kind of prediction or judgement call that isn't deferring to the acknowledged experts is incontrovertibly arrogant.
To be clear, we wouldn't be leveling this criticism if "rich SV guys" were simply increasing financial support for aging research. There are undoubtedly controversies around actors raising money for medical research (e.g. because it arguably results in disproportionate funding for diseases of old rich white people), but nobody criticizes those celebrities of hubris, either.
This criticism is because there are people in Silicon Valley who earnestly, genuinely believe that because they're hackers not biologists, they're gonna cure death by hacking biology. Even if you believe they're right, how can you deny the arrogance?
... finally started tackling this problem in a way that has at least a chance
of being effective.
And this part is why this hubris is harmful: the dismissal of the efforts of anyone other than the "rich SV guys" of even having "at least a chance of being effective".
politics is one of the least cost-effective way of solving *anything* one
can imagine
Which way of solving institutionalized slavery, or women's suffrage, or totalitarianism do you think would've been more "cost-effective" than politics?
At least one of those was directly caused by technology, and solved by politics and its siblings, economics and war.
I moved this out of the parent comment because I really want you to read the parent comment and was worried about the wall-of-text; hopefully, you'll be interested enough to read the continuation, too.
it's not politicians who fed the world, it was Haber & Bosch
Without diminishing their accomplishments, it's important to note that Haber & Bosch have come and gone but world hunger still isn't solved. And it will never be solved by technology alone. It's going to require politics, and economics, and unfortunately probably war. Hopefully technology can provide an important supporting role, but the realization that technology will only be providing a supporting role, that's exactly the humility I'm arguing for.
Also, in what way were Haber & Bosch representative of "rich SV guys" or techie hubris? Maciej has never seemed to me to be criticizing mainstream scientific research at all.
It wasn't environmentalists who saved whales from extinction,
it was whoever invented that synthetic whale-oil substitute.
...
Personally, I'm getting more and more convinced that social
changes are caused by changes in technology landscape, not the
other way around.
Isn't synthetic whale-oil a clear counterexample to this causative claim? Surely you agree that research into whale-oil substitute was driven at least in part by environmentalism, it wasn't the invention of synthetic whale-oil substitute that incited people into caring about the environment.
And again, in what way is the invention of whale-oil substitute representative of the hubris that Maciej is criticizing?
we didn't have liberal democracy 500 years ago not because we
were stupid then, but because technologies that can support it
Didn't the Greeks have democracy 2500 years ago? The Roman Republic also had a form of democracy, which lasted longer than the US is old; our technology may well turn out not to "support it" any better than theirs did.
This reminds me of a quote from a talk I read recently:
More broadly, we have to stop treating computer technology as something
unprecedented in human history.
... let's at least learn from our past, so we can fail in interesting
new ways, instead of failing in the same exasperating ways as last time.
It is ironic that the people most qualified to grasp the problem of essential, irreducible complexity are software developers. That many don't shows that something very fundamental in the underpinning of computer science has been lost: https://news.ycombinator.com/item?id=11932963
> There is also prior art in attempts at achieving immortality, limitless wealth, and Galactic domination.
Hum... Except for very primitive ones, based on religion, I actually cannot think of any such previous attempt. Does anybody know what the author is talking about?
I think the tone is a little sarcastic, if you attenuate the claims a little bit, it would include projects like the Third Reich, the Soviet Union, etc.
OK, maybe, but those have no relation at all to creating technology that will improve people's lives. If this is what the author intends as a joke, I should grant less effort to understanding his other points.
Before you pointed that out, I was expecting them to somehow relate to the Industrial Revolution and the exploitation of the New World. Those are much more appealing metaphors, but I couldn't find anything in them that we'd be refusing to learn from.
They were both about improving life through technology! It just happens they missed other important perspectives. That's part of what the talk is about: people with newfound powers not sufficiently considering the consequences of their actions.
I would also say that the World Wars do in fact relate to the industrial revolution, as part of the reason they were so horrible was the newfound power provided by scientific industry.
Hidden in the article is an example of how moralizing destroys the soul:
> Abortion has been illegal in Poland for some time, but the governing party wants to tighten restrictions on abortion by investigating every miscarriage as a potential crime.
One day, you are "pro-life", and the next day, you are happy to harass people in terrible life situations.
The way I interpret these contradictions is that words in ideology are largely instrumental. They achieve a goal that is seldom stated explicitly. That's how you get aporia like pro-life bombings of abortion clinics and people promulgating family values who refuse to recognize certain families.
It's hard to disentangle subconscious and conscious goals, but the goal of proscriptions against adultery and other reproductive mores usually have a foundation in primogeniture - the practice of passing land and property on to the eldest son.
Robin Hanson has his "Farmer v. Forager" framework to classify many things about which people disagree with respect to social ... policy.
I don't think anyone who isn't a bit ... off can openly promote subjugating women for its own sake; so it's a better story at least if there's a more rational explanation.
The only real "goal" here was the media getting more ad revenue. Maciej is just repeating the most recent wave of bullshit stories that circulated in the Polish press a few weeks ago.
It's utter bullshit, though it did make the rounds in the Polish media for a while. Basically, a lot of people hate the political party that won the recent elections, and so journalists invent all kinds of nonsense because it sells papers.
A standard rule of thumb applies - if you read something on the news that sounds too ridiculous to be true, it's most likely because it's not true.
>How will more smarts give us peace in the Middle East?
Less religion (or at least less true believers). In addition to intelligence, technology can also increase the comfort of everyday life. Comfort + rational thinking = less likely to risk your life killing others.
You'd think that, but there's been a huge swath of engineering school graduates joining ISIS and similar organizations. It's not that ISIS is recruiting them; something about engineering students makes them more likely to join ISIS: https://www.washingtonpost.com/news/monkey-cage/wp/2015/11/1...
The engineering school of the university I work at is plastered with posters urging students: it's not too late, just make the call, don't make the jump. If it weren't so tragic, it would be funny that it's the only college that seems to have that problem.
So I guess ISIS appeals to the ones who would rather go out with a bang.
In the world of an RPG why does my character go off and become an Adventurer instead of living a quiet, humble, out of the way life?
In that fantasy world the normal quality of life isn't that good, and if you're just sitting around you're probably going to be picked off by some a-hole before natural old age.
In that fantasy world if you go out and TRY to have adventure your hard work, blood, and tears usually result in a better outcome. You did good, you get good rewards.
I think the real reason our society feels so vulnerable to 'radicalization' is that there is a distinct breakdown in 'the dream'; in the belief that you can put forth work and actually be rewarded. Or even in the ability to take that risk without starving/growing ill and ending up in a poverty spiral.
The people demand fulfillment, and meaningful lives; they want dignity and respect for their contribution to the whole.
Engineers are given answers and told to look for places to apply all the answers. Scientists are given answers and told to look for new questions that haven't been answered. These are fundamentally different approaches to life. The former, the engineering approach, I would not call "smart."
I don't think it is a coincidence that engineering students are far more religious than science students, with the exception of software engineers. Last I checked, something like 90% of structural engineering students were religious, while only about 10% of physics students were. ISIS claims to have all the answers. Liberal/progressive ideologies readily admit to a world without solid answers. The former appeals to the engineering mindset far more than the latter.
Un-peace in the Middle East comes as much from (IMO, inevitable) mismanagement by the British Empire (and later the US) as anything. Of course, the primary thing is the failure of the Ottoman Empire, with the death-blow dealt by the Kemalists in 1920-1922.
There's literally a book, "A Peace to End All Peace" (ISBN-13: 978-0805088090), dedicated to numbing your brain with a thousand blows to show this.
Technology seems to be at odds with many people's sense of what is human and with human identity. It's a story as old as that of Captain Ned Ludd. I saw this very closely in my most recent gig. I put a (very crude) machine learning element into the flagship product and the reaction was a lot of fear. Geez, I was just trying to keep operators from breaking (expensive) things. Again, though, the incentives were not aligned with that. But the perception was "they're takin' 'er jerbs."
The French are also to blame, and the Germans exploited the British and French mismanagement with considerable glee (as did the USSR WRT the US' mismanagement in the region for a while, and vice versa).
I suffer from English-language bias, and France did not persist in the Middle East post-WWII (many holdings ended in the 20s-30s), so it's easier to forget. Plus, France's colonies in Africa were much more of their empire, and of course, the Indochina .... thing...
Come to think of it, one could probably go with the blame back to Adam and Eve. So maybe instead of asking which particular subsets of previous generations screwed things up, we should ask ourselves why do we allow the effects of that to continue?
One of my wild dreams is that one day nations and societies suddenly look back at the history of harm they did to one another and say, "So what? Fuck it, it's in the past, we have no obligation to perpetuate it."
I wonder what the connection is between rational thinking and nationalism? The latter seems like an utter manifestation of stupidity, or at least an absolute lack of empathy. Ingroup/outgroup thinking is not rationality; it's a heuristic in the mind that doesn't work for us anymore.
Adam Smith at least wrote "Wealth of Nations" after writing "Theory of Moral Sentiments" to drill down into the morality of money. His point in it is that capitalism is a moral improvement on Mercantilism.