
Yes. You're missing that the current cap is 0.

To elaborate: "Current federal rules bar self-driving cars without human controls on U.S. roads. States have issued a variety of different rules in the absence of clear federal guidance, and automakers have complained that California’s rules are too restrictive."




25k is less than 1% of most major auto manufacturers' yearly production numbers. No article I've read on this news has put that into context; instead they praise the legislation as liberating car manufacturers to bring FSD cars to the masses. Most people don't read past the headline, let alone look into the facts of the article. Also, nothing good EVER gets passed unanimously. The last time the House passed legislation this quickly and unanimously was SOPA. Something feels fishy about this. I haven't read the legislation directly, but I bet if a journalist went through it fully, some things would surface that people wouldn't like.


Given that this is from a base of zero, and the cap is set to rise by 4x to 100k by 2020 (which is when Waymo plans to start introducing driverless cars), I do not think that this is at all a slow pace.


GM sells 10M cars per year, so 100k would be 1% of that. Also who's Waymo? Tesla is planning to demonstrate a CA-to-NY trip with zero human intervention in just a few months, and Elon has said they're shooting for 2019 for L5 FSD across their entire fleet built after October 2016 (they've already put in the hardware necessary for FSD, so when the software is ready a simple OTA software update is all that's necessary for your car to have full L5 autonomy).


> they've already put in the hardware necessary for FSD, so when the software is ready a simple OTA software update is all that's necessary for your car to have full L5 autonomy

That's bull. Every other company working on fully autonomous driving is using lidar for obstacle detection, but Tesla's cars have none (only radar and cameras). I see no reason to think their software is any better than anyone else's (in fact I would say Waymo is almost certainly ahead of them there).

Why should I have any confidence that they will be able to get better results than anybody else using cheaper hardware?


> I see no reason to think their software is any better than anyone else's (in fact I would say Waymo is almost certainly ahead of them there)

Waymo has 3 million miles of self-driving data. Tesla has 3 BILLION. Perhaps that can be a reason?


Waymo can collect full video and sensor data from all the miles its cars drive. Tesla can collect some high-level logs. It's just not the same thing at all.


If by L5 you mean that I can get in my car at any random location on legal, drivable roads and instruct the vehicle to take me to another such location (within reasonable distance), i.e. anywhere Google Maps has a route to, with basically no steering wheel required:

Benedict Evans, at A16Z, says that some of the smartest people in the automated driving field say that L5 autonomy is 15-20 years out at the earliest.

Let's just say that Elon Musk has been known to be a little "enthusiastic" when setting goals and objectives for his team.


Yeah, no one serious is expecting L5 anytime soon. When someone on the internet mentions L5, it's code for 'I don't know what I'm talking about'.


> Also who's Waymo?

You serious? It's Google's self-driving car company.


Was being sarcastic. Google's been working on their self-driving cars for nearly a decade and they've only collected a few million miles of data, whereas Tesla has BILLIONS of miles of data and they only started working on their tech a couple of years ago. Tesla's tech is way ahead of Google's. Waymo is overrated.


Miles of data is not the best metric. If you've ever tried to build a machine learning system, you'd recognize some of the issues here. First off, where are the labels? How are you to interpret all that data?

Say Tesla has recorded video of some dude driving down his street in LA and their system detects a car door opening in his data stream. How can Tesla find out whether or not there actually was a car door opening? Human labelers (for billions of miles of data)? Multiple instruments?

If I were designing a system, I'd want a car with extra instruments on it that would help me validate what the primary instruments detected. The idea of just recording a bunch of live traffic seems pretty lame. (You have no way to validate any of the conclusions you draw from the data and hence no way to improve your ability to draw conclusions from the data.)
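
To make the "validate" point concrete, here's a toy sketch of the kind of scoring you can only do once you have an independent reference; the event timestamps, the matching window, and the detector are all invented for illustration:

    # Toy check of a detector against independently labeled reference events
    # (human labels or a second instrument). All numbers are hypothetical.
    def match_events(detected, reference, window_s=1.0):
        # Count detections that land within window_s seconds of a reference event.
        matched = 0
        used = set()
        for t in detected:
            for i, r in enumerate(reference):
                if i not in used and abs(t - r) <= window_s:
                    matched += 1
                    used.add(i)
                    break
        return matched

    detected = [3.2, 17.9, 44.1, 60.5]   # seconds where the car claimed "door opening"
    reference = [3.0, 44.3, 72.0]        # seconds where the reference says it happened

    tp = match_events(detected, reference)
    print("precision", tp / len(detected))   # how often a detection was real
    print("recall", tp / len(reference))     # how many real events were caught

Without the reference column, neither number is computable, no matter how many miles of recordings you have.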

Did you see the article about Waymo's test environment at Castle? That, along with a boatload of simulation, is exactly the kind of testing you need to do to develop this kind of system.

Has Tesla ever demo'ed their simulation environment or other tests?


Not sure why you think a simulation environment is more valuable than real-world data. Waymo doesn't even have as much data in their simulated environment as Tesla does in real-world data, which is hilarious.


Waymo has claimed they do 8 million simulated miles a day, and it's in simulation that they've made the lion's share of their progress over the past few years. In simulation you can model thousands of variations of any particular corner case, and the AI doesn't know the difference between real and virtual environments.

Dedicated test vehicles are collecting many terabytes a day. Google mentioned years ago that they were gathering over a gig per second of data from their test vehicles, and it's likely more than that now. Tesla is gathering a gig or two of data at a time from its HW2-equipped cars here and there. I'm optimistic that Tesla can, by the end of the year, get to about where Google was in 2011.
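
For what it's worth, those two figures are at least consistent with each other; a quick back-of-envelope, where the hours of driving per day is my own guess:

    # Back-of-envelope: raw data volume per test vehicle.
    # 1 GB/s is the rate claimed above; 8 driving hours/day is an assumption.
    rate_gb_per_s = 1.0
    hours_per_day = 8
    tb_per_day = rate_gb_per_s * hours_per_day * 3600 / 1000
    print(tb_per_day, "TB per vehicle per day")   # ~28.8 TB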


In a simulation environment you know ground truth. (Where objects actually were vs. what you saw.) You can play the same scenario from slightly different angles. Or different conditions (rain, snow, low light, bright light). You can look at counterfactuals (what would I have seen if I had done X). You can test the behavior of your system without actually setting a computer loose in the real world.

From "real world" recorded data you have none of these things.

This is the difference between observational and experimental data, and it's a huge one.
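
As a toy illustration of the "different conditions" and ground-truth points (the scenario, the noise numbers, and the fake perception stack are all invented here):

    import random

    # Toy scenario: a pedestrian 20 m ahead, closing at 2 m/s. The simulator
    # knows the true distance; the "perception stack" reports a noisy version
    # whose error depends on conditions. All numbers are made up.
    def simulate(condition, seed=0):
        rng = random.Random(seed)
        noise = {"clear": 0.1, "rain": 0.5, "low_light": 0.8}[condition]
        errors = []
        for t in range(10):
            truth = 20.0 - 2.0 * t                 # ground truth, known to the sim
            seen = truth + rng.gauss(0, noise)     # what perception reports
            errors.append(abs(seen - truth))       # scoring requires knowing truth
        return sum(errors) / len(errors)

    # Same scenario replayed under three conditions, same underlying randomness:
    for cond in ("clear", "rain", "low_light"):
        print(cond, round(simulate(cond), 3))

With a recorded real-world log you only ever have the "seen" column; there's no "truth" to subtract, and no way to replay the same moment in the rain.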


Agreed with everything you're saying, but my understanding is that the Waymo simulation system doesn't simulate sensor input, it simulates the objects. The whole idea being that responding to objects is a separate problem from perceiving them in the first place.


Well, I think that's because Waymo hasn't shipped a self-driving car to the general public yet. My best guess is that they want to achieve level 4 before they make the tech available on a larger scale.


Whatever their reasoning may be, they're still behind. WAY behind.


My thoughts exactly. When I read that the vote was unanimous, I instantly thought, "uh oh, what did they sneak in there?"





