Anthony Levandowski to Larry Page: Google's self-driving project is broken (2016) (twitter.com/techemails)
45 points by mfiguiere on Oct 6, 2021 | 73 comments



(2016)

Anthony Levandowski has since become infamous for his "move fast and break things" approach to life. But you can get away with a lot if there's a presidential pardon waiting for you!

https://en.wikipedia.org/wiki/Anthony_Levandowski

https://www.nytimes.com/article/who-did-trump-pardon.html


Pretty much solidifies my opinion of this guy. Someone who did clearly illegal and unethical things? This is the icing on the cake.


Is it really clearly unethical? It's just messing with intellectual property laws. Pretty much every interesting innovation in history would involve messing with intellectual property laws, if the West's modern draconian ones were retroactively applied.

I don't like Levandowski (anyone who's worked at Google or Uber has perpetuated evil and he definitely paid his way to get that pardon), but it seems weird to claim that the thing people hate him for was actually unethical rather than just illegal.


No, he's unethical.

> While working at Google's Project Chauffeur, the self-driving car program that would later evolve into Waymo, Levandowski allegedly modified the car's software so it could be taken on routes that were previously off-limits. After another employee became angry with Levandowski for altering the code, the two began to argue — which resulted in Levandowski taking the employee on a test run to prove his point, an executive told the New Yorker.

> Levandowski caused an accident during that test run, a former Google executive told the New Yorker. Google's self-driving Toyota Prius allegedly blocked another car from merging onto the highway, which caused the other driver to swerve into the highway median. Levandowski allegedly then took control of the Prius and swerved to avoid contact with the vehicle, but the violent motion seriously injured the other employee's spine.

> Even though Levandowski and Google's self-driving car appeared to have caused the accident, the pair allegedly drove off without checking to see if the other driver was okay, and the incident wasn't reported. Even after Google's self-driving Prius was involved with an accident, Levandowski defended his safety standards, and sent his coworkers an email with the subject line "Prius vs Camry" that contained a video of the accident.

https://www.businessinsider.com/anthony-levandowski-google-s...


Oh my god, I couldn't help but laugh, what a crazy, hilarious scene. I can just picture him berating his coworker, insisting the guy was fine, right after the crash.


Considering that his actions were a key part of Uber's rush to market for self-driving, and that resulted in the autonomous execution of a pedestrian, I'd say that it was unethical.


One might even say criminal


From 2016. For one, I appreciate Waymo's cautious approach that values human life more than business success, unlike, say, Tesla's.


I read this very differently. This was coming off of the failure of Firefly (https://qz.com/1005083/the-cutest-thing-google-has-ever-made...). The problem with it was that you couldn't have a safety driver in the car, so it would require Level 5-ready software.

My interpretation is that Levandowski was pitching the current Waymo strategy: get as many retrofitted vehicles as possible on the road in the target market and collect a ton of training data. These human-driven "self-driving" cars could pick up passengers while collecting the data, building up a customer base and service flow for the future service.


Yeah, it seems like Waymo did end up following Levandowski's ideas and didn't try to build their own vehicles. They have the Waymo Driver now, which is supposed to be modular and can be upfitted to different types of cars/trucks with OEM partnerships.


Precisely. Five years on, Waymo is still working on the project and the company Levandowski left to go work for has pulled out of the market after an extremely high-profile fatality.


Or the fact that Waymo is no closer than they were when he criticized them is proof he was right. How is Waymo doing?


Trolling? Rides are open to the public in AZ and SF. They're the leaders in AVs and every company competing with them relies on innovations they developed along the way.


5 years for two small sections of two cities isn't exactly inspiring. LA was supposed to happen two years ago.

I guess at least they haven't killed an innocent pedestrian with their self driving, like Uber.


1. First ever fully driverless commercial robotaxi deployment in the U.S. — in Phoenix metro area.

2. Trusted tester program opened up to San Francisco residents for rides with a safety driver. Approved for commercial service (with a safety driver for now) by CA DMV.

Seems like they've made progress exactly like what Levandowski would have wanted.


> For one, I appreciate

Capitalism and the stock market, on the other hand, don't appreciate it. They kill "cautious" companies and keep alive companies that have a revenue model, even if that revenue model comes at the expense of safety.


What does that say about capitalism?


"Capitalism" means that the customer decides which company gets the money by his buying decisions.

So, it says about capitalism that few customers actually care about safety when they do their buying decisions (the market then delivers according to these chosen priorities); customers rather complain when their conscious buying decisions turn out to be bad ideas (instead of accepting these as consequences of their buying decisions).


Again, we see a conflation of "capitalism", in which rich people decide everything, with "free markets", in which consumers have some autonomy. In USA, where cheap at-home Covid tests are still not widely available, guess which one we have?

https://slate.com/technology/2021/09/covid-rapid-antigen-tes...


How many people have died because of Tesla FSD again? Let me check. Zero.


That's not true. Just from memory, there was this crash:

https://www.kqed.org/news/11801138/apple-engineer-killed-in-...

Reuters reports about 10 Tesla fatalities directly attributable to Autopilot:

https://www.reuters.com/business/autos-transportation/us-saf...


Several deaths, and more serious accidents, are attributed to Tesla Autopilot though, which Tesla marketing has confused customers into thinking is equivalent to FSD.

Compare Tesla’s initial approach (or lack thereof) to driver monitoring vs GM Super Cruise for instance.


You mean it hasn’t already killed anyone since its release - two weeks ago? I don’t know if that’s something to brag about…

https://www.google.com/amp/s/www.cnbc.com/amp/2021/09/25/tes...


FSD has been released for many months.


Is your claim that it's 0 because Tesla FSD doesn't exist?

Because if you Google "How many people have died because of Tesla FSD" it seems to be at least 10 according to tesladeaths.com . I also found a strange case of 2 deaths in Texas with no one in the driver's seat a few months ago.


Car accidents resulting in fatality are relatively uncommon, thanks to modern safety standards, and Tesla FSD doesn't have nearly as many miles as it needs.

Your sample size is way too small to be implying that FSD is safe - given a year or so, with millions more miles, we'll see what the score is.

By the way, and this is anecdotal, I was personally given an FSD demonstration a few weeks ago. The car immediately made an unsafe lane change in the middle of an intersection. Granted, that was one of the pilot-program beta versions.


None, as FSD is not released yet. Currently they're testing the City Streets module in a closed beta.


So my original comment was correct, and downvoted ...


This is absolutely untrue, and trivial to verify.

Quick edit: I suppose it's more accurate to say FSD hasn't killed anyone but AP has; given the current context, though, I'm not sure that's relevant.


I said FSD!


There have been 10 autopilot deaths last I checked.


There's a very strong argument that the best way to save lives is whatever strategy advances self-driving technology the fastest.

30,000 Americans (and vastly more people worldwide) die each year from human-driving mistakes... let's not ignore the ongoing human cost of the status quo. Self-driving has the potential to cut that human toll to almost zero. We should not trap ourselves in a future where 1 self-driving mistake death is considered worse than a hundred thousand deaths from normal human mistakes.
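For a rough sense of scale, here's a back-of-the-envelope sketch; the mileage figure, the hypothetical 20% improvement, and the 10% deployment share are assumptions for illustration, not data:

    # Rough sketch: lives saved per year if a system modestly safer than the
    # human average handled a slice of US driving.
    human_fatalities_per_year = 30_000      # approximate US figure cited above
    miles_driven_per_year = 3e12            # assumed ~3 trillion US vehicle-miles/year
    human_rate = human_fatalities_per_year / miles_driven_per_year  # ~1 per 100M miles

    av_rate = human_rate * 0.8              # hypothetical system 20% safer than average
    av_share = 0.10                         # hypothetical share of miles it handles

    lives_saved = (human_rate - av_rate) * miles_driven_per_year * av_share
    print(f"~{lives_saved:.0f} lives/year") # ~600 under these assumptions

The exact numbers don't matter; the point is that even a modest improvement, deployed at scale, outweighs a handful of high-profile failures.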


The status quo is ridiculous, but we already have a better solution: dense cities and mass transit. We should be building trains using decades-old tech, not AI BS. Of course I'm just shouting into the void, but the dream of self-driving cars is incredible over-engineering and will, in the long run, set us back on the problem of global warming.


It's much easier to sell the future if you portray it as "autonomous pods for everyone" instead of "trains everywhere," even though the latter is vastly more efficient, simpler, cheaper, and eco-friendly. As usual, convenient ideas fare better than good ideas.


So in your analysis, how many people should robocars be killing? The easy answer is "anything less than 30K", right?

Somehow that case only looks "very strong" to people with a financial interest in the tech.


Death by robocar is simply unacceptable.

Death by shared responsibility is well accepted. It’s about agency and consent.

If you rescued a group of 30 drowning people with your yacht, no one would think you could shoot one or two of them in the head, on a whim, and argue that it's okay based on the fact that your actions led to a net saving of 28 lives.

So, how you die matters. We human drivers are all out there making a reasonable effort to avoid accidents. The standard self-driving tech must be held to is also “reasonableness.” None of the deaths due to self-driving cars I have heard of can meet that standard. They are unacceptable.


Unless you are a serial drunk driver, you would want FSD to be a lot safer than the human average.


When you factor in the human element, getting on the road as soon as possible may not actually be that strategy.

Uber had one fatality and is now out of the market. The public has very little stomach for error here. Multiple companies with multiple fatalities could shelve the whole industry indefinitely.


Wait, does everyone get a free Tesla with FSD at some point?

How does your solution spread to millions of drivers? I think if you actually want to start saving lives now (and not decades from now for FSD + policy change + phasing out), mandate real driving classes and tests for licensing.


So, if you make it more time-consuming and more expensive to get a license, essentially the entire burden is now put onto the less well-off segment of society.

And I'm skeptical how much good you do. Yes, younger drivers are less safe, but I'm unconvinced that simply having some more instruction time and a test that is more rigorous but presumably not onerous really provides the equivalent of a few years of experience.


Exactly. Perfection is the enemy of progress.

If one death is caused by Tesla's FSD should we ignore the fact that statistically it would have already saved x lives?


Excessive caution also costs lives when it comes to self-driving, because it pushes adoption further into the future. Tesla is overall near parity with human drivers in terms of casualty rates, though with different people at risk. Assuming Waymo's system is significantly better than that, it's therefore safer than human drivers and should be deployed to save lives.

Of course that’s valuing everyone’s lives equally, there are some arguments to be made that slight improvements aren’t enough.


> Tesla is overall near parity with human drivers in terms of casualty rates, though with different people at risk.

Unproven claim.


If it were much worse than human drivers, everyone would be talking about it, insurance rates would reflect it, etc.

I am not saying it's as good on average, and clearly it's much worse at some things. That said, it's not, say, ten times as likely to result in a fatal accident.


That's very different from making the positive claim that Autopilot has parity with human drivers on casualty rates, since there are a number of other reasons that could be the case.


I didn’t say Autopilot was as good as the average driver. I said it was within the ballpark of the average driver. The fact that some major insurance companies offer discounts for Teslas with self-driving over those without seems like strong evidence for that.

Unless you have some evidence for why there isn’t a huge story around Autopilot’s overall safety?


First, that's not true. Most people don't even have FSD beta, let alone use it. Secondly, Teslas are notoriously expensive to insure.


It’s not about the rate for a Tesla, it’s about the rate for Teslas with and without Autopilot. Direct Line, the largest car insurer in the UK, still offers a 5% discount in 2021 for Teslas with Autopilot over those without it.

That’s a very clear indication of near parity.


Are you talking about the discount that they introduced with this disclaimer [0]:

> Direct Line said it was too early to say whether the use of the autopilot system produced a safety record that justified lower premiums. It said it was charging less to encourage use of the system and aid research.

That was back in 2017 and they probably have better numbers now, but it's not exactly an indication of autonomy and being safer-than-humans. These sorts of discounts are also widely available for other ADAS systems.

[0] https://www.reuters.com/article/uk-direct-line-ins-tesla-idU...


It’s still available in 2021, that’s plenty of time to collect all relevant data.

And again I am not using it as evidence that Autopilot is better, just that it’s within the same ballpark as human drivers. Offering a significant discount bounds how bad autopilot can be.


Do they also give discounts for assistive driving features like adaptive cruise control? Because a fair number of insurers do.


> Tesla is overall near parity with human drivers in terms of casualty rates

No, it is not. It's at parity with human drivers IN IDEAL CONDITIONS. Autopilot is frequently flummoxed by rain, snow, or other high-glare conditions.

It's kind of cheeky for these companies to claim their driver assist technologies have a better record than humans when they get to disengage every time the conditions are less than ideal. Humans don't have that option.

Hell, I could claim I'm an expert at anything as long as I get to tap out once I'm in over my head...


Weather conditions only contribute to ~16% of fatal road accidents. https://ehjournal.biomedcentral.com/articles/10.1186/s12940-...

So that doesn’t actually change the near-parity question much. Further, the question isn’t whether 100% self-driving is available today for all roads and weather conditions; the question is when it’s good enough to start saving lives.
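A minimal sketch of that bound; the ~16% figure is from the study above, while the share of miles driven in bad weather is an assumption for illustration:

    # How much can the "fair weather only" caveat shift a parity claim?
    fatal_share_bad_weather = 0.16   # share of fatal crashes involving adverse weather (cited above)
    mile_share_bad_weather = 0.10    # assumed share of miles driven in adverse weather

    overall_human_rate = 1.0         # normalize the all-conditions human fatality rate per mile

    # Human fatality rate restricted to fair-weather miles:
    fair_weather_human_rate = (overall_human_rate * (1 - fatal_share_bad_weather)
                               / (1 - mile_share_bad_weather))   # ~0.93

    # A system at "parity with the overall rate" that drives only in fair weather is
    # roughly this much worse than humans in like-for-like conditions:
    print(overall_human_rate / fair_weather_human_rate)          # ~1.07x under these assumptions

So under these assumptions the like-for-like correction is on the order of 10%, not something that flips the comparison.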


If the problem is millions of traffic accidents from individual human drivers then it will be far cheaper and faster to bet big on public transportation, high speed/light/etc. rail, instead of a fully autonomous self-driving car future that has always claimed to be just 5 years away (for the last 15 years).


I can’t think of any affordable public transportation system ready to replace rural roads. What exactly are you suggesting?

On the other hand if you’re keeping rural roads then self driving cars could make a huge difference even if cities banned cars.


Levandowski is arguing in favor of retrofitting in order to deploy now, versus spending the time and money on future design with an unknown ship date. And he argues the hardware is good enough, it's the software that needs improving. It's approximately 180 degrees opposite of Tesla's strategy.

Is self-driving a mostly solved problem? I'm not even sure what the context is, long haul highway, or mixed (pedestrian, biking, non-self-driving autos)? Humans get confused with edge cases: poor road design, poor and non-standard signage, difficulty predicting other people's intentions in particular when human driver and pedestrian can't see each other's faces to know if they see each other as well as hand signaling to reduce the ambiguity, etc.

I come from an aviation background, where the approximate equivalent of mixed city driving is VFR, and of long-haul standardized driving is IFR. And while the big, expensive, newer planes can do Cat IIIc approaches with autoland in zero visibility, not everything is a Cat IIIc approach (there is no equivalent for take-off), not every airport has the ground infrastructure to support such landings, and there's a whole area of ground operations that also isn't easily automated. Ergo, I use the best-case scenario of IFR as the most likely environment in which we could have autonomous aircraft, and yet we don't. And that's when the environment is under normal positive control; as soon as there's no radar, compulsory reporting by a human pilot becomes required under IFR. What if the weather forecast is off and you need to fly to the alternate? What if there's a loss of radio communications? All of these are accounted for by IFR; there's a written protocol pilots learn, but human pilots perform those actions, not flight navigation systems or autopilots.

And from my point of view, the self-driving automobile problem is astronomically more complicated than IFR. Which is now where I come back full circle and opine that I think we're a long ways off from seeing "full self driving" (whatever that is) in anything other than very constrained, idealized (standardized) environments.


> I come from an aviation background, where the approximate equivalent of mixed city driving is VFR, and of long-haul standardized driving is IFR.

Your subsequent paragraph is extremely interesting because you talk about how IFR is "easier to automate", but a layman's understanding of IFR vs VFR is "VFR is easy for humans, IFR kills bad pilots", so that first sentence threw me for a loop: "mixed city" is insanely harder than long-haul trucking, so why is it VFR?!


s/IFR is easier to automate/IFR should be easier to automate/, and that's mainly because, in theory, positive separation is neither the pilot's nor the autopilot's problem. The stricter standards also constrain the environment. Under VFR, separation is primarily the pilots' responsibility. In fact, even on an IFR flight plan, in visual conditions, pilots are expected to maintain separation, hence the reporting of entering instrument meteorological conditions. It is the very thing humans are good at, visual analysis, that autonomous systems aren't good at. But if the need for visual analysis is taken off the table, that should make automating IFR+IMC easier than automating VFR.

The analog for pedestrians might be radio communications. Try getting an autonomous system to understand non-standard jargon, which happens all the time even among native English-speaking pilots. Of course we should just have standardized, voiceless communication, thereby reducing ambiguity, but how do you deal with legacy? Again, in aviation this would be easier: just say no more voice radio communications (work on this has been in progress for some time). But get rid of pedestrians? How? There's also a historical point of comparison: pedestrians had an absolute right to cross the road in the era of horse-drawn carriages, before the automobile. Once the automobile arrived, so did ticketing for jaywalking.


Depends on what you are trying to solve. Personally, I find improving Level 2 assistance systems much more useful. I'm actually able to use and benefit from OpenPilot on a daily basis, and all it does is lane keeping, but it does it so well.


It's a really good point. Maybe we're allowing perfect to be the enemy of good. Where are the worst problems? The edge cases we want to fix? What are humans bad at that autonomous systems are better at? Probably top on that list is patience. Only humans get impatient, frustrated, and have emotional situations cloud their judgement.


Yes exactly. Contrast with my friends who have Teslas who never use the autopilot or self driving functions.


I would never be able to live down the internal shame of typing an email to a CEO with "We're loosing our tech advantage..."


Sometimes I suspect executives cultivate a sort of deliberate illiteracy as a way of keeping others off balance. I have received so many emails ending with the grating non-sentence "Please advice."


hey when you're setting loose your tech advantage you definitely want to advise the CEO!


Why's that?

Edit: ooooh the typo


Separating the messenger from the message is crucial here. Having been around the valley for far too long, I've seen the kind of leadership he is griping about: focusing on the wrong things, shutting down great ideas, moving too slowly, and outright refusing to acknowledge failures. Eventually the competitors move ahead and you fail to deliver. You either pull out of the market, continue building a mediocre product that sees a couple of years of "meh" adoption, or you buy the competitor. All of which can be avoided.


Google to Anthony Levandowski: your ethics are too broken, even for us.


Let's not get ahead of ourselves...


Shit, he was exactly right.


One of the most brilliant engineers out there. A true madman with an old hacker mentality that is nowhere to be seen these days, except for maybe George Hotz...

The old days were different. Today it's about leetcode, being overly happy on Zoom calls, and playing along with investors' playbooks... All capitalism kept of that culture is the hoodies, and that's because another $100M-funded startup in their portfolio is selling them.

Feeling nostalgic...


> L,

God, what a douche.


This is (was) the common way to address Larry Page. Sergey is S.

That said, I'm not arguing that you're wrong.


Bad title, email was about Google making custom cars.



