For example, Gmail's new autocomplete apparently helps people write email faster. It doesn't write the email for you. Why isn't that a bicycle for the mind? It's partially helping you do something you could do anyway, but faster. (And maybe better, if you're still learning English?)
Maybe someday, for some emails, Gmail will be able to ghostwrite the entire body of an email for you? But that's just a matter of technical competence, not a change in strategy. Tech companies will change tactics based on what's feasible.
But maybe the big change is when you let the machine take full, mostly-unattended responsibility where safety is on the line (real driverless cars).
Yeah, automation vs. amplification is a perennial issue, not sure it maps that well to companies.
The difference is control: whether the task is sufficiently predictable and well-understood.
It's similar to asking how much of a task can be hidden behind an API: the specific usage determines the acceptable performance metrics.
Autocomplete/suggest is an offspring of T9 and Clippy; the latter, by MS, was closer to automation.
BTW: to be fair, when Steve Jobs demoed the iPhone, he used it to prank-call a Starbucks and order 4,000 lattes. Whereas Google pranked a business automatically.
> automation vs. amplification is a perennial issue
How is that even a difference? Amplification is done by automating subcomponents of what you're trying to do. If you automate away the entire task, it means you amplify the human working one level up.
The business model follows from these fundamental differences: a platform provider has no room for ads, because the primary function of a platform is to provide a stage for the applications that users actually need to shine.
Please tell Microsoft this. Windows 10 and Skype are both littered with ads. But I guess this is what happens when operating systems become commoditized. All of the other major operating systems are either free for anyone to install (Linux), freely available to OEMs (Android/ChromeOS), or bundled with hardware (Apple's operating systems).
1. On the lockscreen
2. In my start menu's tilebar
3. In my start menu's search results
4. In my start menu's program list
5. In Explorer, for OneDrive
6. In Edge, on a "blank" page
7. In toast notifications
And that's just what I remember offhand; I'm certain there's even more.
Windows 10 being "littered with ads" might be an overstatement, but I remember seeing plenty of them in start "menu". AFAIR the start "menu" in Windows 8/10 is by default filled with bullshit pseudo-ads like tiles with crap newsfeeds, tiles linking to Windows Store games you really should play, Xbox games ads, etc.
I always aggressively clean up the start "menu" whenever I get on a new Win8/Win10 instance, so at the moment I don't have one near me to tell you exactly what the ads were.
Occasionally, you may see advertisements for other companies in Skype (if you're using the free version of Skype). Advertisements help keep Skype free for millions of users, and these advertisements will not disrupt your Skype experience in any way.
Skype is bundled with Windows as the default messaging platform just like iMessage for the Mac.
Microsoft could return to charging for Windows (via OEM installs and fleet licenses); or just leave it free as the "best way to experience Live/365/Azure".
I'm almost sure they charge OEMs, but they charge less than they used to for low-end computers to compete with Chrome OS, and people are keeping computers longer. They also gave Windows 10 away to anyone who had at least Windows 7.
The attempt to create a dichotomy between the two philosophies is a bit thin when you frame the question around products instead of "computers"
All four companies care deeply about creating compelling and useful products for users. That gets spread across many different domains for all of those companies.
Someone can love and find value in a piece of software just as easily as they can a piece of hardware.
Interesting that the author didn't mention Intelligence Augmentation/amplification[1] in the article. IA is often contrasted with AI in that it makes humans more able to do their jobs.
It's interesting how AI gets so much hype and the potential dangers and benefits are discussed so much. But the reality of tech so far has been IA. We make tools to enhance our abilities, not tools that exist for their own purposes.
IA is not sexy; you cannot get huge funding for improving existing workers. That is what computers should do anyway (and haven't succeeded at, because people visit FB and Hacker News at work). But AI promises you don't need employees, and that is something owners would invest in. Reality doesn't matter; they want the promise of full automation. What they actually get is another topic.
Managers are just employees from the POV of the owners. What the owners hope for is AI eliminating workers, and strong IP laws ensuring they still get to be the owners and rake in all the profits.
I don't think so. IMO, Apple and Microsoft are just less aggressive in taking all the agency away from their users. They do that too, but they don't put as much focus on it as Google does.
Those who have watched Halt and Catch Fire might be reminded of the competition between Rover & Comet, which are two companies with different philosophies in a similar space.
So one approach is to replace the mind (take it out of the loop and "let the robots take over"). The other is to "augment the mind and enable it to do more". One sounds better than the other.
The robotic choice unfortunately has a very authoritarian or centralist vibe. That is, it takes over for the person, but under central tutelage. With the other, the tutelage lies with the person using the augmenting tech.
Both approaches are really equivalent on the technical level. Augmenting the mind is done by automating and abstracting away lower-level stuff - taking the mind out of the loop and letting the robots take the details over.
And the problem with all those companies is agency. It seems they all try to minimize user's agency, by making it purposefully impossible to control or dig into underlying automation. They differ by the effort put into taking control away from the user, with Google being the leader in blatantly dumbing down and isolating everything.
> In Google’s view, computers help you get things done — and save you time — by doing things for you. Duplex was the most impressive example — a computer talking on the phone for you
But who is "you"? Obviously, "Duplex" talks on the phone on behalf of people needing an appointment, ie, not everyone. And specifically, not the people tasked with answering the phone, who are duped into thinking they're having a human interaction when in fact they're talking to a machine. It's even worse than that. We're used to telling machines to do things for us. But the people answering Duplex calls are being told what to do, by a machine.
When GPS became popular I had a friend who absolutely refused to get one because, she said, she "wouldn't take instructions from a machine".
At the time, I thought that was the silliest thing ever.
It is a silly example, because GPS (or rather, GPS-based nav systems) really, 99% of the time, is better than you at navigating. Also, the technology itself is pretty much purely informative - a map with a path displayed. Voice instructions are optional and something implicitly understood as friendly directions, not orders.
I think most examples of machines telling people what to do still happen in context of paid work, and even there, quite often there's a good reason for that.
Personally, my worry with Duplex is all the abuse - er, growth hacking - that will happen.
There are good, honest use-cases for this technology though: for example, a food ordering platform that needs to gather holiday opening hours for hundreds of thousands of restaurants globally. These restaurants are not tech-savvy, so the only alternative is using people to call them.
I don't disagree. I'm not condemning the whole product right now. I have only two primary worries:
1/ Huge increase in amount of robocall spam.
2/ Given the discussion on the Duplex HN thread, I'm starting to seriously worry people will try and do machine-to-machine communication through Duplex, which is something completely idiotic and wasteful, but might also become easier than doing the right thing (agreeing on an API).
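To make the contrast concrete: "agreeing on an API" for the restaurant-hours case upthread could be as small as a shared data format. A minimal sketch (all names and the schema are hypothetical, not any real Google or Duplex interface) of what machine-to-machine exchange looks like once both sides agree on structured data instead of synthesizing and parsing speech:

```python
import json

# Hypothetical shared schema: a restaurant publishes its holiday hours
# as structured data once, instead of answering the same question by
# phone thousands of times.
hours_record = {
    "restaurant_id": "cafe-123",
    "holiday_hours": [
        {"date": "2018-12-24", "open": "09:00", "close": "14:00"},
        {"date": "2018-12-25", "open": None, "close": None},  # closed all day
    ],
}

def is_open_on(record, date):
    """Return True if the restaurant is open on `date`.

    Falls back to assuming regular hours when the date has no
    holiday entry.
    """
    for day in record["holiday_hours"]:
        if day["date"] == date:
            return day["open"] is not None
    return True

# What the ordering platform would fetch and parse - no phone call,
# no speech recognition, no deception about who is talking.
payload = json.dumps(hours_record)
print(is_open_on(json.loads(payload), "2018-12-25"))  # False: closed that day
```

The point isn't the specific schema; it's that any agreed format at all makes a Duplex-to-Duplex phone call strictly wasteful by comparison.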
GPS is better than a human only where it has better data. Bring it to a place where it has no data and it's useless.
Meanwhile, humans can easily adapt when they find themselves in places they've never been before.
Yeah. These two "philosophies" are merely marketing. Absolute fluff. Does anyone actually believe that google is "obsessed" with giving users back their time, as the article says?
No. Fuck no. Are these guys kidding me? They're obsessed with scooping your data and serving ads to you that are so seamless with your experience that you mistake them for your own decisions. Like google actually wants to help you personally, just out of the goodness of their heart. Who the fuck believes that? These things don't exist for the common good, they exist to increase value for their shareholders. They're like paperclip maximizers in that respect. And you aren't the client. They are not out to make your day happier and healthier and more human or whatever.
Never trust a service that is so integral to your life you can't abstain from it, yet you don't pay for it.
Calling these "philosophies" is probably marketing fluff, but the dichotomy is definitely a real thing in interaction design; interacting with a computer as an agent (querying, dialogs, etc.) vs interacting with a computer as a tool (physically manipulated interfaces). All of the companies cited build products in both categories, but Google (and now increasingly Facebook) definitely skew towards the computer-as-agent applications, and Microsoft and Apple towards computer-as-tool.
At an organizational level I'm inclined to agree with you that the altruism ascribed by the article is far too optimistic, but I would imagine that there are individuals in each company who do think along these lines and try to design along them. (I do, however, find Google "giving users back their time" a bit hilarious as their business model is precisely to get users to spend as much time as possible looking at ads - maybe they free us from mundane tasks so we have more time to browse ads).
I disagree with this particular dichotomy being a real thing. The "computer as an agent" vs. "computer as a tool" interactions are functions of the problem being solved. You'd think that CLI interfaces are tools, but then tools like ls or mysql are clearly agents querying things.
IMO all those companies are on the spectrum of trying to remove user's agency from the problem. That happens when you build your tools to perform ever more complex queries and actions, while simultaneously making the user unable to see all the relevant details and tweak them. That happens when you dumb down your tools. All of the companies mentioned are guilty of it. Apple software is powerful and mostly very well made, but it's continuously dumbing down and locking the user away. Microsoft software used to be "bicycle for the mind", but it too is getting dumbed down every iteration and evolves towards pretty fluff. Google is simply much further down this spectrum, but they're not in a separate category.
It's not about productivity. A big part of productivity is interoperability, and that's not something platforms want to give you. It's all about getting you to use their software, so they'll bait you with pretty interfaces and faux-productivity promises, in hope you pay for it (with money or data).
True. And Microsoft and Apple are starting to show worrying signs of going to the other side with their own garbage AI "assistants" among other things. Windows 10 supposedly lets you switch it completely off, but it's a very dark pattern. They make it incredibly esoteric and there's no telling if a future strongarm update might just turn it back on and expand its capabilities dramatically without telling you except for some obscure never-read EULA footnote. Give it even more power to gobble up your data for dubious advantages. There's really no way to use Windows 10 satisfactorily without hacking at the registry with a machete.
I do think that individual developers and designers have good human hearts and altruism, but the corporate structure is devised to keep that under control and serve shareholder value above everything.
There was a talk at the latest CCC which likened corporations to very slow, analog AI which responded to stimuli and attempted to increase share value by any means necessary over years and generations. And that this slow, silicon-free AI has already taken over our governments and captured almost all regulation. It was an interesting idea.
The "bicycle of the mind" platform vs aggregator argument is a bit contrived: the author is conflating features built on top of platforms with the platform itself. Most of the examples listed (Google Photos auto-edit, Maps suggestions, Gmail compose) are typically features built on top of platforms that can be (and are) used without these features.
Instead, I would think of these tools as an electric bicycle of the mind (to borrow from the article): the purpose of the motor is to help you on steeper hills. It is inherently still a bicycle.
Maybe stop trying to shoe-horn multi-decade companies with thousands of employees into ill-fitting philosophical silos? I'm all for arguments for/against companies but retro-fitting them into narratives and then proclaiming A is better than B is disingenuous.
Is there a third philosophy of decentralisation, one with different incentives and new constraints that are more to our liking in avoiding a dystopian future?
> I believe that we need to design technology to help bring people closer together. And I believe that that’s not going to happen on it’s own. So to do that, part of the solution, just part of it, is that one day more of our technology is going to need to focus on people and our relationships. ...
If Landmark Education (or one of its many descendants) focused on software technology, that would arguably be a key message. I wonder if that's just a coincidence.
Do you know if Landmark is a cult? Is it something to be worried about? A close friend has recently gotten super involved with it and has done all three courses and is evangelizing it pretty hard, he's normally not someone who gets sucked into this kind of thing easily though.
He also doesn't seem "brainwashed". I've googled the organization and have my reservations, but I'm not sure if it's legitimately helpful with some bad marketing strategies, or Scientology/Waco/Jonestown territory.
Facetiously: if a computer does things for you, it frees up your time to watch more ads on the computer; if a computer helps you do the things you need to do more efficiently, it frees up your time to get off the computer.
> collective action... the best form of which is bounded by the popular will
This notion is dangerous. Popular will brings fascism, authoritarianism, discrimination, instability, and uncertainty. Popular will is not and never has been a sustainable form of authority.
Political economists have a term, ‘institutions’ to describe sustainable forms of authority, like laws, morality, family structure, culture, language, literacy, technical education, career stability, and pedagogy.
Institutions like “don’t be evil”, and “we will never sell your data”, are the entire reason why the current batch of tech earned enough trust to exist at all. “Do the right thing”, and “bring people together”, however nice they sound, appear to be easily manipulated by popular will, which will be either their downfall, or ours.
I think he's saying something different in this context: that the reason democracies are generally the least bad forms of government is that they are bounded by the popular will, that is the consent of the governed puts limits on the exercise of power. He's not advocating for the tyranny of the majority, rather saying that the power of Google and Facebook is particularly dangerous because it is not constrained or bounded in the way that a democratic government's power is bounded.
I think you make a good point about institutions though. In practice they may be more important on a day to day basis than the popular will in placing constraints on government power.
This is a good point. Sustainable growth requires a balance of power, which is why things have developed in this direction. Collective action at large scale has traditionally been controlled by governments. The most successful governments are bounded by popular will, which is directed by institutions, which are informed by traditions that have developed from previous successes and failures.
Disruptive innovation requires disturbing some part of that chain of influence. Microsoft and Apple have found success mostly by providing tools to people who exist in established parts of that chain. Google and Facebook and many other current-generation startups are more likely to use the power of collective action to create feedback loops where collective action directly informs further collective action with limited checks and balances. This sometimes leads to imbalances which spark mass outrage over policies that do not fit into people's concepts of how things ought to work.
Is this kind of power sustainable? It's likely to follow a more volatile path of quick growth followed by outrage over the outcomes that make people uncomfortable, but it might find equilibrium regardless, as it still has to exist in the world that was formed by traditional institutions.
In reality, Google and Facebook grow more unbounded, as they continually use their power($) to buy more companies and become more horizontally-integrated organizations to add to their business models. This by proxy also absorbs the members/users of those other companies and brings them under their corporate umbrellas.
It would be like if the U.S. decided it would be good for them to own the entire Caribbean in addition to Puerto Rico, so they acquired all the islands (through power or other means) and then those citizens became some sort of pseudo-citizens of the U.S. as a result.
Isn’t this just the story of all corporations though? They start selling a niche product, and then they have three axes for growth: vertical, horizontal, and addressable market. All corporations take “breaths” on each of these axes as they see opportunities. But a corporation that grows too wide finds a more focused competitor stealing their share. A company that grows too tall sees a competitor using network effects against them. Growing the addressable market generally lifts all ships, but hits a limit once you max out global marketing.
Every company will eventually flex as far as they can on all these axes. I’m interested if you have some insight about something that makes Google particularly dangerous in the horizontal direction?
At least when it comes to Facebook, for a significant number of people, participation in Facebook is not an actual choice. It's similar to living without electricity: yes, you could do it, but the costs are too large to actually do it in practice. It's still freedom of choice on paper (because no one points a gun at people and forces them to use Facebook), but not in practice.
This is maybe one of Facebook's most crucial achievements: It has turned itself into something which people feel they cannot live without, even if they might hate it.
Google is a bit different story. Living without Google is doable with some but not gigantic sacrifices.
I think that's overstating it. Who literally needs Facebook as much as electricity? I'm saying this as someone who has a Facebook account, but never logs in to it or posts anything.
It's certainly overstating it, but take this example from my own life.
My step-sister spent years fighting cancer. I knew this, and would talk to her occasionally, but I wouldn't find out about significant events until after the fact.
As soon as I surrendered and finally joined Facebook, I discovered she was in the hospital again, and was able to go visit and spend some quality time with her. As it turned out, it was the best conversation we ever had, the most time I spent 1:1 with her before she passed away.
Are there other ways of keeping in touch with family? Sure. Once your family is invested in using Facebook, however, that's where you have to be to know what's going on all the time.
I dislike Facebook, and look forward to getting rid of it. I cannot, however, dismiss the unique value it offers.
The only other means of communication which might have led me down the same path on that occasion was a family message board on QuickTopic.
However, frankly, Facebook is much better, for a wide variety of reasons. The message board was never ideal, privacy was a binary choice, people regularly struggled with it, photo sharing was a joke, etc. It's not at all obvious that the short hospital stay would have been communicated there.
Email, phone? Nope, wouldn't have happened.
Has Facebook displaced other communications mechanisms? Sure. But for the most part, the alternatives were far from ideal. Facebook has won by being better.
It's just unfortunate that they aren't satisfied with being a good way of keeping in touch with friends and family.
I recently wrote a blog post in which I described the various lock-in effects that are at play. This statement from above has little to do with my personal experience (I'm almost gone from Facebook, although not completely yet) but with actually understanding how crucial Facebook is for most people.
Here is the relevant part:
- The need to supplant emotional labor with Facebook (read Sarah Jeong’s essay on how she tried to stay away from Facebook and really felt bad about it. For many, not having Facebook means an extremely weakened social support network)
- The need to use Facebook to get required information (such as parties, events, gossip, personalized news).
- The need to use Facebook to run a business/make a living (Read: Emerging Markets Can’t Quit Facebook)
- The need to use Facebook for work-related tasks.
- The need to use Facebook to maintain and reach a personal audience (particularly relevant for influencers and people from the fields of media, marketing and communication, politicians etc.)
- The need to log in to 3rd party sites with Facebook credentials
Not everything applies to everyone. But I bet that almost every active Facebook user would recognize themselves in some of these. And unlike what the general tenor at HN suggests, most people are not willing to accept any sacrifice in order to remove Facebook from their lives.
Tyranny of the majority is a well known concept in political science. For example, initially the civil rights movement (both the civil war era, and the 1960s) and laws was quite unpopular. From a certain point of view, they were strong armed thru by activist judges and politicians who were violating the popular will.
> From a certain point of view, they were strong armed thru by activist judges and politicians who were violating the popular will.
Not just "from a certain point of view", Obi-Wan: The whole point of a constitutional system is to check the majority, to say that some laws are impermissible, no matter how much the majority wants them, because they violate some principle we regard as more important than majority rule.
Still the majority of the population, as it happens. If serious percentages (say, 60%) of a democracy want something enough that they make it a voting issue, it'll happen eventually. You'll get a candidate who is happy to do 'everything the other guy would do + X'.
The big thing with constitutional changes is that they will normally require serious unity amongst the population, most of whom will likely by default agree with the constitution.
The majority isn't greedy and evil, they just don't put up much of a fight when the politicians 'misunderstand' and implement policy that favours the majority without accepting the damage to a minority. There are a lot of policies like that, and in many cases one suspects the politicians know exactly what they are doing but the public is intellectually lazy.
> Can you elaborate how popular will brings authoritarianism? Seems like a complete non-sequitur.
Someone once said, 'Democracy must be more than two wolves and a sheep voting on what's for dinner'. Democracy, in its purest form, is arguably mob rule.
That's why constitutional democracy - democracy with limited government and the rule of law, including human rights - is the ideal and the foundation of modern government. The majority, the popular will, can't vote to take away your free speech and other rights, no matter how unpopular you are. Remember the foundation of the U.S. is the assertion that 'all men' have "inalienable rights", and that governments exist to protect those rights.
It's not proven that "constitutional democracy" is the ideal. The "foundation of modern government" said that a Black person has no rights, but counts as 3/5ths of a person for their master's voting rights.
Modern constitutional democracy and republicanism were developed during the 17th and 18th centuries by Enlightenment authors (Locke, Rousseau, and Montesquieu). They set up some fairly novel concepts for enabling representative government, including the idea that rights derive from the principle of authority (Locke) and the idea of separation of powers to limit authoritarianism (Montesquieu).
Ideal is hard to quantify here, but if we focus on maximizing representation and effectiveness at the same time, then it's fair to say that these forms of government were uniquely successful. Despite their flaws and failures, they still seem to be the best at maximizing those two variables. Prior to the U.S., people accepted Plato's sensibility that all forms of democracy devolve into authoritarianism -- which obviously is a meme that persists today. (It's worth pointing out that at the same time the U.S. was forming, France had its democratic revolution which ultimately settled into Napoleonic rule.)
The racism of the Enlightenment authors I mentioned is fairly staggering. It's hard to reconcile that with their work, even at the time it was written. Despite that, America now has 100% suffrage and managed to retain its government structure (via a civil war).
> It's not proven that "constitutional democracy" is the ideal. The "foundation of modern government" said that a Black person has no rights, but counts as 3/5ths of a person for their master's voting rights.
Great point; I agree that I shouldn't have used the word "ideal", because it's not at all ideal. Constitutional democracy sucks; it's just much better than any alternative. (I think Churchill said that.)
I was under the impression that it was "liberal" democracy that gives rise to principles-over-majority rule in practice. The idea being that government authority is derived from the will of the majority just as strongly as it is derived from the rights of the minority.
This seems much more closely aligned with the principles of liberalism than it does constitutionalism. Any dictator can write a constitution and derive authority from it. They can't, however, practice authoritarianism through the practice of liberal principles.
I see this as a matter of terminology, not meaning. Here's what I know:
I've seen "constitutional democracy" used by experts, though not often enough to say it's a term of art in that field. Certainly I've seen experts say that the basic formula = (democracy) + (constitution that protects minority rights and limits government).
And certainly "liberal democracy" also is used to describe that form of government. Those liberal principles are enshrined in the constitutions; I don't know if that defines "liberal democracy".
> Any dictator can write a constitution and derive authority from it.
That would omit an essential component of the 'formula', democracy.
I think you're right, at least in an academic sense. I just associate "minority rights" with "liberal" by applying the principle that people should be allowed to do whatever unless there is a (relatively) objectively justifiable reason to do otherwise. So enshrining that principle in a constitution makes for a liberal and minority-rights protecting component, while the democracy makes for the authority-of-the-majority component which counterbalances it.
> enshrining that principle in a constitution makes for a liberal and minority-rights protecting component, while the democracy makes for the authority-of-the-majority component which counterbalances it
Such blather just gives people a guaranteed argument in favor of their opinion. If the majority is in agreement, great--majority rules. If the majority isn't in agreement, great--they are suckers for fascists.
The devil's in the details. There isn't really such a thing as popular will - there are a lot of people who all want different things and who can change their minds. What can happen in practice is you have a referendum like the Brexit one, where there is a lot of misinformation and the vote goes 48-52 one way or the other, and then the more fascist tabloids go on about how the result is the will of the people, and anyone who questions it is an enemy of the people, etc. Much of that language originates with Hitler and Stalin, who used it in worse ways.
> Popular will is not and never has been a sustainable form of authority.
This is a strange claim. It could mean a number of things, but the senses in which it is defensible seem like stretches.
Popular will brings fascism and authoritarianism because it is a source of authority. It has been the well-documented foundation of extremely powerful dictators going back to the ancient Greek tyrants. It was the basis of Julius Caesar's authority, and while it proved unsustainable for him, it was also crucially important for the emperors that followed him for the next few hundred years.
At the other end of history, it has been a bedrock of the authority of recent dictators in Germany, Italy, Latin America, China, Russia, Africa, Korea, etc. etc. etc. etc.
You could make the argument that governmental authority, over the course of centuries, doesn't generally derive directly from the popular will. But there are three problems with that argument: first, people generally don't care about whether the popular will 300 years from now will endorse the institutions of today. Second, while it is true that governmental authority over long periods does not derive from the popular will, it is always pretty harshly constrained by it. Third, religious organizations routinely do derive substantial authority solely from popular support, and they do this for many centuries at a time.
> Institutions like “don’t be evil”, and “we will never sell your data”, are the entire reason why the current batch of tech earned enough trust to exist at all.
And then, once they had earned our trust, they broke those institutions. So if they are the only alternative to "popular will", then it looks like we're screwed.
Yeah, a pure democracy would be an absolute shithole. You don't want to live in a world where people's reddit votes determine every action of state, from legislation to foreign policy and war to incarceration. Kind of like that episode of The Orville.
This is a very american centric perspective due to decades of propaganda by libertarians who want to set up an artificial divide between government and people. See 'Democracy in Chains' by Nancy Maclean.
The government is you in a democracy, it's there to protect your and everyone else's interests, that what democracy and rule of law means. It's what guarantees a somewhat civilized society. It is not this simplistic 'majority can do what they want' illiteracy spread by market fundamentalists.
Markets have existed for thousands of years, it did not lead to any civilization then and cannot now. Discrimination, segregation, slavery, plunder and exploitation are the result of uncontrolled capitalism and free markets, not democracy and 'popular will'. Anyone who knows history and human greed understands this instinctively.
It appears some people become rich and increasingly alienated from people, society and government and begin to believe they don't need government. This is the kind of hubris that leads to tyranny and quasi feudalism, which is of course what they want.
> Zuckerberg, as so often seems to be the case with Facebook, comes across as a somewhat more fervent and definitely more creepy version of Google
At this point I think the difference between what FB and Google are doing is superficial marketing. The abuse potential of their technologies grows with every new peephole that they drill into your life.
While people become increasingly dependent on said technology for a growing proportion of life's activities, FB and google have repeatedly demonstrated that they do not have our best interests in mind, and serve primarily to exploit overreaches in privacy and trust.
Not to be melodramatic, but it's like some weird kind of slavery.