At my last job, something I tried to push unsuccessfully was a company-wide automation initiative.
Imagine a company-wide incentive structure for automation whereby if you automate 90-100% of your job, you receive massive bonuses. If you completely automate your job, the bonus could be the entirety of your annual salary as a single payout. Or perhaps a combination of bonus and equity.
So now you're out of a job. But if you can automate other jobs in the organisation, you also get bonuses.
Eventually the majority of the company becomes automated, hopefully generating the same or greater revenue and value for customers, with lower overhead costs. Which is great for any stakeholders who continue to receive revenues, but terrible for employees. So it would be good to include some kind of equity and/or revenue-sharing model. The sell could then be "hey, let's all work together to automate this company, we'll get bonuses for doing so, and when the whole thing is automated we can continue to receive income while sitting on the beach".
This sounds like a cool idea - if and only if everyone coordinates on it fairly, which seems extremely hard to achieve. I can see tons of caveats, e.g.
- If you automate away your non-tech co-worker (someone who can't contribute much to the automation of jobs other than his own), will he get the bonus? If so, how do you ensure the people doing the automating won't feel treated unfairly, since the non-tech guy just got a bonus for doing nothing?
- If someone automates you away, will you get anything? Will you be angry?
- If you keep automating away your co-workers, who's gonna be left to help you automate yourself away? (assume for a moment that you have the most difficult to automate job in the company)
I mean... this could work if you'd manage to set up incentives so that everyone in the company works together and not against each other. But I'm not sure how to do this, especially in a way which doesn't leave anybody feeling he's being treated unfairly.
> If you automate away your non-tech co-worker ... will he get the bonus?
Yes, and you get nothing. It's fair; just don't automate away other people's jobs, automate away your own.
> If someone automates you away, will you get anything?
Yes, 100% of the bonus. It's not a choice, you must leave. I still think that's fair.
> If you keep automating away your co-workers, who's gonna be left to help you automate yourself away?
You don't automate away your co-workers unless you like doing free work. Problem solved. If you have the most difficult to automate away job, you entered into it knowingly. The market will ensure that those jobs will be higher paid by supply and demand.
This model doesn't require cooperation to work, just a relatively clear set of job responsibilities and a clear definition of what it means for you to be automated away.
I actually really love this idea, from an employer's perspective.
Isn't what you just described about automating other people's jobs away a huge part of IT to begin with? We take paper forms that have long review chains and turn them into electronic forms (more effective data entry, fewer people needed) with mostly automated review chains (cutting middle management), yielding higher returns.
This is why IT is considered White Collar (especially software developers and engineers)... It's our job to automate people out of jobs... which then churns the need for jobs developing ever more complex systems that aren't human. Often badly written.
We'll never reach greater than, say, 60% unemployment... there's just too much crap to shovel (even in automation).
Ok, so how about when you're an accountant? Now your employer has a strong incentive to pay someone extra to eliminate you, and you have no way of automating yourself because you don't know squat about programming.
That's the kind of situation that needs to be handled.
> and you have no way of automating yourself because you don't know squat about programming.
Automating existing work is more than just programming; a big part of it is expertise provided by people knowledgeable about the existing workflow. None of the projects I've been on that involved automating existing work involved only programmers and no one with experience doing the work.
Yes, I know, and I appreciated that fact in the post upthread. The point here is, if you go with the suggestion of 'redemeer that "if you get automated away, you get 0", said accountant will have no incentive to help with the automating, and every incentive to make it as hard as possible - which I think is not the result we want to have. To get a happy ending for everyone, we need coordination, not competition.
> The point here is, if you go with the suggestion of 'redemeer that "if you get automated away, you get 0", said accountant will have no incentive to help with the automating
If you go with a principle that if you are involved in automating the job away, you get some reward, the accountant may have some incentive to help (especially if the automation is likely to happen anyway, just at lower quality -- but still perhaps a net cost savings to the company -- which would mean not only the company getting less than if the accountant were actively involved, but the accountant getting less as well).
If you get automated away, by yourself or by anyone else, you instantly get the bonus and you're instantly fired.
The accountant has every incentive to either be an amazing accountant so that any automation would be a pale imitation (everyone wins), or he can choose to automate himself away, move to the next job, automate himself away and pick up the bonus there, and so on (everyone gets their due).
This is competition, and I think it's a happy ending for meritocracy. The only people who lose are those who desire to continue to be paid while not generating value.
My reply would be that if a programmer can out-account you as an accountant, then you've failed as an accountant; you need to look for another job or learn the necessary skills like programming.
Of course this does mean a lot of people out of jobs, which is the crux of the article, but I don't think it's a fault or weak link in this employment model.
Definitely depending on the type of organisation, you'd need to tweak the incentive structure. And yeah the ideal scenario would be the entire organisation working together to automate all of their processes, and for everyone to share in those benefits.
The basic, not fully thought out idea, was that if you automated your own job you got the full bonus. If you automate another person's job, you both split the bonus 50/50.
I think with most jobs, automation would definitely involve programming and tech skills. But there would be a lot of non-tech tasks as well, such as thoroughly understanding their workflow and processes and identifying more efficient practices.
"Automation" can also be a combination of computer and human outsourcing too.
In the small web dev company I was working for (launching internal startups), the "creative process" of designing website mockups was incredibly time consuming. I suggested we collect all of the customer requirements and collateral, and automatically outsource the mockups via API to something like MobileWorks/Freelancer/Elance/99Designs/DesignCrowd etc
The problem there is the perceived unfairness that occurs when someone else in the company, sharing your approximate job, puts you out of work via automation.
How do you compensate those who lose their jobs as a result? What if they were also trying, but just happened to not be first to succeed?
It's a really cool idea though. (As a manufacturing engineer in a fairly old-fashioned company, I am currently trying to automate as much as possible; no idea yet about individual bonuses since I'm <1 year on the job)
Yeah it would need a lot of thought around the incentive structures.
The original thought was that if you fully automate your own job you get your salary as a bonus. But if you fully automate someone else's job, you both split their salary 50/50 as a bonus.
Now both of you are free to work on automating other jobs in the org. So you'd end up with various teams working together to automate jobs in the org.
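To make the split concrete, here's a minimal sketch of the payout rule as I imagine it (the function name and salary figure are my own, purely for illustration):

```python
def automation_bonus(salary, automated_by_self):
    """Bonus payout when a job paying `salary` is fully automated.

    - Automate your own job: you get the full salary as a bonus.
    - Automate someone else's job: you and they split it 50/50.
    Returns a dict mapping role -> payout.
    """
    if automated_by_self:
        return {"job_holder": salary}
    return {"job_holder": salary / 2, "automator": salary / 2}

# Example: automating your own $80k job vs. a colleague's
print(automation_bonus(80_000, automated_by_self=True))   # full $80k to you
print(automation_bonus(80_000, automated_by_self=False))  # $40k each
```

The nice property of the 50/50 split is that the automated-away person is never worse off for cooperating, while the automator still has a positive incentive to go after other people's jobs rather than only their own.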
As a byproduct, it would also free up a lot of human capital in an organisation to focus on pure R&D and intrapreneurship.
Why would a good capitalist pay years of salary, percentages of the company, or whatever else you're talking about offering, when they can simply outsource the automation job for $5,000 to China? And then fire you.
And then hire you back on a $5,000 1099 contract to automate other people's jobs?
You seem to be trying to come up with some sort of "fair" system, as if that were how compensation works now or ever will work.
That's why I hate political labels. What do you mean by saying "he's a marxist"? Does subscribing to one of Karl's ideas require automatically subscribing to every other?
I know you meant this as a joke, but I want to emphasize a general point - one of the things that totally dumbs down conversations about the economy is people grouping ideas under labels like "communist" or "libertarian" or whatever and evaluating them in batches.
To be a bit contrarian though... there is a useful aspect associated with using labels, even in political/economic discussions, in a way that is similar to ascribing names to things called "design patterns" so that one can communicate concepts more tersely.
If done in good faith, I think it's legitimate to start with approximate labels, like "libertarian", and the arguing parties can then proceed to make their position clearer if necessary. Hopefully, this saves time.
Also, I think it's fair to describe someone who believes that "workers should own the means of production" as a Marxist (at least it's more accurate than socialist or communist).
If done in good faith is the key though. I'm complaining because I've seen too many discussions derailed by people assigning labels to each other and then calling it a day. The argument goes like this: "you said that, it sounds like what Marx would say, and Marx was wrong in something completely unrelated, so therefore you're wrong too". It's a trap people too often fall into.
Well, you can start with the same question and reach different conclusions. Does this mean that you're still wrong? And does the fact that Marx was wrong about one thing mean he was wrong about everything else? It's too easy to group some idea with some others and then dismiss the whole thing by noting that one of the others is wrong.
The founders, directors and investors could still own 51% of the company.
Heck, if they wanted they could offer only the bonus payout incentive structure with no equity or revenue share. In essence that would still encourage employees to automate the company. Employees work themselves out of a job, and the owners reap all of the automated revenues.
It's hard to change the status quo, especially when there are paychecks directly involved.
At a company I worked for, after they realized that things couldn't go on anymore without massive automation, their first approach was to introduce it into the infrastructure lifecycle and require all new infrastructure to be heavily automated. That allowed the legacy stuff to die slowly without much disturbance.
A different idea entirely would be to spin off a subsidiary with those ideals and slowly migrate the work there. Perhaps too much work, but if the situation is desperate enough...
It's definitely a scary proposal for management to accept. The company I pitched this to was a 15 year-old web dev / digital agency with ~15 employees and <$2m in revenues. So a pretty tiny and poorly performing company. Management liked the idea because almost all of their revenues were going into wages.
But there must be a particular type of company, at a particular scale, with a particular ethos out there that would be up for giving an automation initiative a shot.
If it worked, it would make for an impressive news story which could spark others to follow suit.
Imagine an economy where everyone's job was to automate their job...
Really small web dev company. Almost 15 years in business. 15 employees. <$2m annual revenue. Something like 90% of the revenues went to wages, and their processes were ridiculously slow. They had 3 actual devs, 3 designers, 2 SEO people and the rest were managers or admin staff. So many manual processes that could clearly be automated.
> Instead, my job will be outsourced to a datacenter. Everything I do professionally, from reading and giving feedback on specs to estimating development schedules to architecting and typing in the code and writing unit tests to reviewing others' code, every one of these things will be subject to automation. We can't do it now, of course, not at any price.
The whole thing derives from these flawed premises. Machines are incredibly dumb. Even these machine-learning wizz-things. They can do statistics, inferences, correlations. That's very very far from enough, especially for something like programming, which we can see as turning a spec into code. First, one still has to write a spec (so the "programming" isn't totally gone). Second, it means you essentially need to have "solved" natural language processing; as well as have assigned semantic values to linguistic constructs. Good luck with that.
At this point, I feel compelled to point out that humans don't agree on the meanings of what they say. Most of the time, humans can't write software that meets a spec. That's because of ambiguity, and machines are way way worse at dealing with that than humans are.
I think the error is simply to think the current rate of progress will keep going (and actually even the idea that there's been an acceleration of progress in AI recently is an illusion -- what we're seeing is increased application). It's the same error the fathers of AI made when they assumed everything would be solved in a matter of years. That's where the expression "AI winter" comes from.
> Machines are incredibly dumb. Even these machine-learning wizz-things. They can do statistics, inferences, correlations.
Do you think the human brain does something more? All we can do is statistics, inferences and correlations, and we do it poorly.
Moreover, there is an even bigger problem than 100% automation of everything - namely, the automation of 50% of jobs. What are we going to do with half of society being unable to find any job? And that most likely includes your or my parents, siblings and friends, who will no doubt come to us techies for assistance. And if current trends continue, it's 10 years away, not 100.
Other technologies will make the need for jobs obsolete.
It's called a post-scarcity society and it must be achieved at the same time or slightly before most jobs become automated.
A molecular re-arranger would be the holy grail of a post-scarcity society.
Also, you're not giving enough credit to the human brain. Whatever it does, we can't even get close to replicating it with current computer technology.
Yes, they can emulate a bee or an ant with computers, but that creature doesn't have the ability to reason, come up with ideas, and dream (as far as we know, and as far as we're emulating).
At present the only difference is the human brain is far more malleable and adaptable than is a machine. Our most flexible AI is generally based on supervised learning. That still requires human interpretation and verification to work properly.
I think you can also state that machines generally lack the ability to discern expansive context unless explicitly trained to do so.
Granted, it will happen. True AI requires these things. At present all ML needs a push from a human to become more "intelligent."
I don't think the current economic model can survive 50% unemployment, and to be honest I think social unrest would take its toll before we even got that far. We may even see a partial collapse of society in some developed countries, wiping out the advances of the last century.
The bigger question is can we as a species evolve into a more altruistic species and move beyond capital. Some days I don't think we can...
I think the idea that programmers will be replaced by software-writing AI programs is silly and unrealistic, but that isn't to say that programmer jobs are secure. What's really going on is that developers' tools are getting better.
Languages, debugging tools, version control systems and bug trackers are all rapidly reducing the number of developer-hours required to implement a given program, which we should expect to reduce the size of the programmer workforce eventually if we can't find new things for them to do.
It is possible that we will find enough new problems for programmers to solve to counteract increased productivity, but for many applications, most of the general problems that one would hire a programmer to do already have reasonable-quality solutions. I think we'll see smaller and smaller teams of hyper-productive developers working on a shrinking set of increasingly esoteric projects, or things with a single specialized use (like calculating taxes for that particular year).
If we dealt rationally and benevolently with it, 100% unemployment would be an age of abundance... almost a post-scarcity society. I do not expect to see it soon, but I think it's possible. If technology continues to advance at the same pace it has for the past 50 years and if we can engineer our way out from under the impending fossil fuel energy crunch, I'd say it's probable within the century.
Even before things get as extreme as effectively "100% unemployment," I could certainly see a massive drop in the need for labor. We may already be seeing the leading edge of this.
It wouldn't mean nobody would do any work, but it would mean that basically everyone would be a trust fund brat and able to work on whatever they want.
Truly intelligent beings would do that.
But so far we aren't known for being that intelligent. We will probably continue with means-justifies-the-end market fundamentalism, maybe with a bit of ham-fisted wealth redistribution done in an unfair and inefficient way thrown in to keep the masses from rioting too much.
Indeed. Our real problem is that of coordination, and that there is no easy path from here to there - if we keep just following the market's lead hoping it will magically solve everything, we're not going to get to that age of abundance.
I think there is a (relatively) easy path from here to there. Let's presume that we want to create a Star Trek post-scarcity future.
One theory I like for Star Trek economics is that they've got an extremely high basic income, the equivalent of $10M/year or so. Energy costs about the same (~10c/kwh), and is the limiting factor in replicator economics.
Some stuff is really expensive, like land and human labour. Replicator-made material goods are incredibly cheap, unless you need a huge amount -- you couldn't afford to buy a copy of the Enterprise.
Obviously you don't need to work, and most don't. A few are VR addicted, but most are just dilettantes, they spend their time doing hobbies, art and travel, much like the able bodied retired population in the western world. A large minority get "real" jobs, but mainly for purpose and prestige. And some do it to get rich -- they want to buy a planet or a starship or something.
The nice thing about this theory is that the path is quite clear: start with a very small (possibly inadequate) basic income, and increase it regularly as circumstances permit.
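As a rough back-of-the-envelope check on that path (all the numbers here are my own assumptions, not anything from the show): if a basic income started around $12k/year and grew at a steady real rate, compound growth reaches the "$10M/year" level within a century or two:

```python
import math

def years_to_target(start, target, annual_growth):
    """Years of compound growth at rate `annual_growth` to go from `start` to `target`."""
    return math.log(target / start) / math.log(1 + annual_growth)

# From $12k/year to $10M/year at 5%/year real growth:
print(round(years_to_target(12_000, 10_000_000, 0.05)))  # ~138 years
```

So "increase it regularly as circumstances permit" doesn't even require anything dramatic - a few percent of sustained real growth per year gets you there on a Star Trek timescale.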
That's the only cogent explanation of how the Trek universe's economy might actually function I've ever heard.
So huge goods (e.g. Starship) would be pricey, but everything else except labor feels "free." Definitely describes how that society seems to operate, and I could imagine it working if you had replicators, matter antimatter power generation, and other kinds of mega-technology.
I always thought it's the agreed-upon explanation ;). I.e. Star Trek's post-scarcity society is enabled mostly thanks to the replicator technology. So food, clothing and basic construction materials are essentially free and unlimited. People do whatever they want - most civilians just go and live and further their passions; some do work for others (e.g. Sisko's dad and his restaurant), but they do this because they want to, not because they have to. Many join Starfleet, which basically offers people grand goals and adventure, and enforces structure and obedience based solely on non-monetary incentives.
Things like starships are pricey partially because they're huge (and thus reaching the limit of civilization's energy output) and partially because some things apparently just can't be replicated with their-era replicator technology.
Like I said in another thread: free marketers have adopted the Marxist notion of historical inevitability and automatic progress.
I don't think it works that way. We get the future we construct. If we just trust the means to produce the ends, we will just get some random future selected for us by impersonal and amoral evolutionary and economic forces that don't give a damn about our well being.
Historically speaking, most philosophical questions never got answered by philosophy. They just tend to become obsolete as science advances -- and I consider any question regarding strong AI to be half philosophical by nature (100% unemployment can only happen with strong AI; otherwise we'd spend the last 0.0001% of employment, or whatever it is, focusing on developing strong AI).
With that in mind, a note: from what I've gathered from actual machine learning and AI researchers, anything that resembles strong AI is not just far away, but ridiculously far away.
On a less tangential point: a lot of our economic system and its assumptions are based on scarcity of resources (human labor being one of the scarce ones). By the time automation could really take hold and realistically take over most human work, it seems to me there would be many things to be concerned about other than employment, and they depend on whether the model of the economy is a scarcity or an abundance one, whether it's actually sustainable, etc.
AI comes up a lot but in my opinion there is no guarantee that it will ever actually exist.
AI is by definition artificial, and the lines between artificial and non-artificial will be blurred very soon as BCIs come around.
If we take all deceased people's brains and throw them in a pool of goop that allows them to talk to each other and connect them to a BCI so that they can communicate with the rest of the world, is that Artificial Intelligence?
I predict that we will have ultra-intelligent humans/transhumans and advanced AI might or might not ever come into existence.
As expected, DARPA is leading most of the BCI research programs at the moment. I'm not sure how I feel about Super Soldiers and modifying humans, that's a huge topic in itself.
There's an interesting little passage where the protagonist of Use of Weapons, experiencing the Culture's society for the first time, meets a sociologist who works part-time cleaning tables at a cafeteria -- even though the job could easily be automated.
“I could try composing wonderful musical works, or day-long entertainment epics, but what would that do? Give people pleasure? My wiping this table gives me pleasure. And people come to a clean table, which gives them pleasure. And anyway" - the man laughed - "people die; stars die; universes die. What is any achievement, however great it was, once time itself is dead? Of course, if all I did was wipe tables, then of course it would seem a mean and despicable waste of my huge intellectual potential. But because I choose to do it, it gives me pleasure. And," the man said with a smile, "it's a good way of meeting people. So where are you from, anyway?”
That same concern existed 150 years ago and the truth is that we are still working the same amount of hours as before. We just don't do the same things for living.
There will always be jobs to do. Most of the manufacturing and repetitive jobs can be automated, but there will always be jobs that only humans can do. For instance, food can be processed by a machine, but I prefer to pay to see how a sushi master prepares each Nigiri for me. Or, would someone prefer a massage chair rather than getting that massage from a human? Music? Arts? Design? All those things can be done by machines, but we will always prefer to get them from a human. Our lifestyle will change. Objects will lose value because they will be extremely cheap to produce, and our jobs will be more focused on services to other people.
What concerns me most is the inequality, and how the wealth produced by machines will flow through society. With that level of automation in industry it will be really easy for a few people to control the production of all the goods.
But this is something that has to be solved by politics.
The human mind, while having less data storage, and being drastically slower for some types of problems, will stay significantly faster than computers at other types of problems.
We merely must learn to better teach those brains (matrix upload?), extend them (artificial memory enhancements and built in calculators) and utilize them for the problem sets they excel at.
> The human mind ... will stay significantly faster than computers at other types of problems
This is controversial. Semantic tagging of photographs used to be something that was done much better by the human brain, but all of a sudden it seems like computers are catching up fast. I hesitate to say there's any one domain that we won't be able to tackle algorithmically/via machine learning in some way.
> extend them (artificial memory enhancements and built in calculators)
My phone is already my artificial memory enhancement. Communication is via a high latency, unreliable interface, but it's (for all intents and purposes) an extension of my brain ;)
Absolutely, technology of some kind will be able to replicate all human domains of intelligence at some distant point in the future, but our own minds will be continually extended (with lower and lower latency, and increasingly reliable interfaces).
The question becomes moot as humans and machine coalesce.
I'm happy to see people thinking about this sort of stuff.
I foresee a Constitutional Amendment limiting the amount of computational power any private person or entity can own. Each person, upon birth, would be granted a certain allotment of computational capacity. This computational capacity can be hired out, but you collect its paycheck. If you take this to the logical extreme, everyone could own the robot that replaces them at their job, and live out their life collecting its paycheck.
Check out Marshall Brain's "Manna", where the concentration-of-wealth nightmare scenario is taken to its logical extreme. (I don't find the scenario very convincing, but the thought experiment to get there was useful.)
I really can't imagine 100% unemployment, because I think there will always be a demand for actual human people in some roles: hairdresser, waiter, singer, prostitute, masseuse, athlete, actor, croupier, etc.
Will robots/AI be good enough to do all these jobs? Sure, they might even be now. But as long as some people want to be sung to / served by / entertained by an actual flesh-and-blood human, there will still be employment in those sectors. In fact I wouldn't be surprised in the far future if those service positions were the most valuable in society, since that's where real scarcity will be.
Maybe some of these jobs won't be replaced with a robot, but will become completely irrelevant/unnecessary?
Instead of having a robot hairdresser, what if cutting hair wasn't even a thing? e.g. maybe we'll all get holo-wigs that let us change hairstyles as easily as profile pictures.
Similarly we don't talk to switchboard operators to enter URL in the browser addressbar. We've got AI that does it better and doesn't judge :)
Exactly. And yes, in principle we could keep inventing completely random and absurd jobs just to keep people busy earning a living - but then, what's the point? Nowadays we have to work because resources are scarce. If we reach the point of post-scarcity, forcing people to slave away their lives for food and shelter seems just evil.
I wasn't suggesting we invent jobs to keep people busy - I think we should strive towards employment being required for no one. Once food and material goods are post-scarcity, absolutely they should be provided to all.
But as long as anything is scarce (and experiences are, broadly, and I don't think there's any way around that), we'll have something like "employment". It might start at "I'll sing you a song if you massage my back" barter-style, but that evolves into currency eventually.
Ok, maybe you weren't suggesting that but I see this tossed around quite often - that people will always find new jobs and therefore we can keep requiring everyone to work for sustenance. And we're inventing many bullshit jobs right now - not to look too far, half of the work done in web development is probably contributing to nothing else than zero-sum games of companies trying to out-advertise one another.
I agree - as long as resources required for basic survival are scarce, there will be employment. But the point is, we shouldn't try to maintain the status quo as long as possible; we should jump to letting people live without a job the first moment this becomes feasible.
> But as long as anything is scarce (and experiences are, broadly, and I don't think there's any way around that), we'll have something like "employment".
If labor isn't scarce (e.g., if there are easy substitutes that provide the utility provided by labor), then there's no real basis for employment even if there is scarcity in goods.
Scarcity guarantees that people will have an incentive to sacrifice to get scarce goods, but if labor isn't scarce, then sacrificing time in the form of labor isn't likely to be an option to do that.
Sure and there will always be some demand for horses for various purposes but the total demand for horse labour is way below its peak in 1915. Do you foresee any future technology which might reinvigorate the demand for horses?
Increased human unemployment might, actually. If both cost and time weren't huge barriers, I expect more people would ride.
But I think the future for humans looks somewhat similar to that of horses today; a small number of us will still be "working", but more because of individual preferences than any kind of need.
It's not about cost and time; horse maintenance is more labour intensive, and then there's a problem of waste. Actually shortly before the first automobile was constructed, London authorities were worried that if the demand for horses keeps increasing, the city will literally be drowned in manure.
My point wasn't that elements of those jobs couldn't be done (or even done better, by some measures) by machines, but that there exist people with a strong preference to humans in those jobs.
Take croupiers; we have very good video poker, automated roulette tables, virtual blackjack, but still lots of humans dealing out cards (especially in the high roller rooms). I was at a casino in Las Vegas two weeks ago; the robot roulette gave strictly better payouts (36 to 1) than the human-run table (35 to 1) for single number bets, but there were still people at the tables.
There will no doubt always be demand for something to be done by humans, just because we're kind of a whimsical species. But I see no way for such jobs to become the basis of the economy. We need to end the concept that everyone has to work for a living. Otherwise it's not the 100% automation that will kill us, it's the 50% automation, i.e. half of the population not being able to earn their bread.
Sometimes it is forgotten that what is of 'value' is what is inherently scarcer. Today, it just happens that there is a shortage of automating agents (programmers, AI scientists, etc.); once these skills (STEM) become common, they won't hold much value, and humans will value the next scarce thing (handcrafts? arts? the future will tell).
We also need to remind ourselves that the rich cannot feel rich without an army of poor underneath them. I can guarantee that having the very rich shielded from the rest is not an evolution of our race.
This whole thing - I can see it coming and am fully convinced that it will happen. I just can't work out how to leverage that insight for personal benefit (not millions, just a modest advantage). It feels a little weird: for once I'm ahead of the curve and recognise it as such, but I still can't seem to leverage it. My profession isn't anywhere near AI or even tech, so it's a little difficult.
That's exactly what I feel too. Having this knowledge allows me to do little more than tell people that the end is nigh. I don't see a way to either leverage that knowledge to safeguard myself and my friends/family, or (preferably) do something to safeguard society at large. It's easy for me to notice now when people around me, or even I myself, do Moloch's bidding, but I can't see a way to contribute to the fight against it.
"You and I and everybody else, if the present trends continue, will be selling what we do to the highest bidder."
That's how it's supposed to work. The problem here is not "selling what we do to the highest bidder", it's the "You and I and everyone else". If you are competing against "everyone else", then the highest bid is going to be pretty low.
The article alludes to one core bastion of economics - "the invisible hand", without mentioning another - "as humans, we are creatures of infinite wants".
With that in mind, a computer can never truly replace a human, as there would never be a purpose in creating a machine that 'wants' things different from what its owner wants. Along the same lines, as robots fulfil each current human want, new wants will simply emerge.
A trite example - A robot that cleans my floor satisfies a current want, but I want a robot that cleans my whole house - every surface. After that's fulfilled, I'm likely to want a robot that can reconfigure the house based on my predicted needs, and after that, a robot that works in conjunction with others to make sure my preferred dwelling config is available in the right place and the right time every night. I'm likely to keep working in my job to have this, and my career will no doubt evolve to reflect the group need that people have for these robots.
The selfless robot in Interstellar is a realistic scenario - incredibly helpful, but programmed to want what its owners want. Imagine endless configurations of that - I think it's a likely future.
'Stupid' machines have been replacing humans since the industrial revolution, but humans have just taken a position higher up the chain. The article claims that 'smart' machines will end this process - I disagree. The greed of human nature will mean we always have to work.
> I'm likely to keep working in my job to have this
You are assuming that humans will remain better at developing towards the remaining non-automated desires of humans. That is unlikely. When a robot is able to reconfigure your house to meet predicted desires, you will probably (and most people will certainly) no longer be relevant in any job that you (or they) are capable of performing.
Anyway, that doesn't matter. What matters is that even if you are a super-elite engineer who can compete a few years longer than the average joe, that doesn't fix the problem for the more-than-50% of the population that is already being rapidly obsoleted.
You're rather missing the point; when robots are intelligent enough to replace you, you won't have a job, nor will anyone else. All your new wants will be handled by robot workers, if you can still afford them now that you're out of work. This idea that there's always new work for humans will not hold once we have AI; that's the whole point.
I think there has to be some kind of massive leap before we get to that state. Regardless, we'll get creative freedom to do whatever we please and to excel at things we might not have been interested in earlier because they had no "economic value".
Obviously it's going to be a journey before the masses get used to not working as they have been all their lives.
There's that old joke about a Lisp programmer who's looking for a job - "will write code that writes code that writes code that writes code for money" ;).
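For anyone who hasn't met that joke: one level of it is just ordinary metaprogramming. A minimal sketch in Python rather than Lisp (the function names here are illustrative, not from any library):

```python
# One level of "code that writes code": generate a function's source
# as a string, then turn it into a real function with exec().
def make_adder_source(n):
    return f"def add_{n}(x):\n    return x + {n}\n"

namespace = {}
exec(make_adder_source(5), namespace)  # "writes" and loads add_5

print(namespace["add_5"](10))  # 15
```

The joke, of course, is stacking this four levels deep and calling it a career.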
Google has cars that are about to kill the entire transportation industry. Amazon already reduced the workers in some of its warehouses to biomechanical grabbing hands; all higher-order functions are performed by robots. No, I don't think we're giving too much credit to robots.
Autonomous vehicles and biomechanical grabbing hands are completely different from engineering software. The aforementioned tasks, while impressive feats (especially in computer vision), are programmable tasks. Creating "robots" that can engineer/create new software without a software engineer is not going to happen in this lifetime.
Just because it happened twice - from agriculture to factories, and from factories to services - doesn't mean there will be a huge new need for employment. In the past, human consumption just increased. I'm not so sure there will be a need for all that much more consumption. If the promise of near-human-level AI comes true, we'd need exponential growth in consumption to keep up the pace.
Also, a lot of those jobs disappeared forever in the third world and never came back. Agricultural jobs were never really replaced with manufacturing jobs. Only now are some third-world countries getting manufacturing jobs. If those are replaced with automated manufacturing, the coding, engineering, and management jobs will go to the first world.
So maybe it works out for America and Europe. But places like Indonesia and India get permafucked yet again by a new industrial revolution.
This argument keeps coming up. The problem is that it's based on exactly one data point, so saying "historically..." is hardly relevant!
The automation that is happening now has never happened in the history of the human race. You simply cannot extrapolate from past events and say it'll be like that again.
It is more based on thousands of years of data. Every generation likes to think that they are unique, and that "this time everything is different". In reality life will go on.
I have no doubt that life will go on, but that isn't the argument here. The argument is whether automation that kills jobs in one area reliably creates more in another. There's no logical basis for thinking that's a consistent pattern; large-scale mechanical automation is only a few hundred years old.
Agriculture provides for the basic needs of humans. What do humans do once their basic needs are met? Some of them go on to do meaningful things that expand the human condition, such as devoting themselves to the arts and the sciences. The vast majority of them turn to zero-sum status games.
This is a serious problem, because the more physical wealth can be provided by robots, the more those status games necessarily become about obtaining power over other humans. This puts capitalism, as an enabler of this kind of status game, on a path to escalating conflict not just with democracy but with human rights.
Of course, many would say that this is already happening. For now, the conflict is simply at a level where human rights violations and everything that goes with it can still be exported to third world countries.
Not only do you have zero-sum status games, you have plain old zero-sum games, which do nothing but waste resources. Prime examples: marketing and political campaigns. Every additional effort people put into those two basically goes to cancel out the effort of everyone else, giving you a system in which you can waste infinite resources for no marginal gain.
Yes, but automation is getting better. "It has historically always been the case" is a deceptive comfort. It isn't inconceivable that automation might start eating into knowledge-based work even more, continually shrinking the percentage of the population that is employable regardless of talent or education. Not that this is necessarily a bad thing (10-hour work weeks, anyone?) - unfortunately, the current world economy isn't set up to handle this transition well.
There is little doubt that there is work for everyone. The problem is whether there is profitable work for everyone.
For example, you will find countless people to help dig various archaeological sites, help process the various museum collections still hidden in boxes, and so on.
Also, "historically" is misleading. Historically, the industrial revolution was one of the worst periods to live through - the conversion of those 90% to industry was not a happy story. And the post-war golden age came after the worst wars ever.
We are also dealing with some hard limits this time. On the production side, we are hitting the Earth's physical limits on just about everything. Even on the consumer side we are hitting limits: even for virtual activities, there are only 24 hours a day for you to consume.
History is good for reminding you to be optimistic. However, history is not a solution: some guy in 100 years may look back at this period as another great time in human history, but it's down to us to work out the details of how to achieve it.
The difference is, previous automation replaced human muscle. Today's automation replaces human brain.
Of course, it does it gradually. So you see low-skilled jobs disappearing, and no useful jobs being created in their place. As this process continues, I expect we'll be facing a growing number of people trapped in poverty who have no means or ability to retrain themselves for jobs with higher skill requirements.