Hacker News
George Hotz cancels his Tesla Autopilot-like ‘comma one’ (electrek.co)
330 points by gatsby on Oct 28, 2016 | 473 comments



I love this paragraph of the Request:

> The singular includes the plural; the plural includes the singular. The masculine gender includes the feminine and neuter genders; and the neuter gender includes the masculine and feminine genders. "And" as well as "or" shall be construed either disjunctively or conjunctively, to bring within scope of this Special Order all responses that might otherwise be construed to be outside the scope. "Each" shall be construed to include "every", and "every" shall be construed to include "each". "Any" shall be construed to include "all", and "all" shall be construed to include "any". The use of a verb in any tense shall be construed as the use of the verb in a past or present tense, whenever necessary to bring within the scope of the document requests all responses which might otherwise be construed to be outside its scope.

I've seen similar paragraphs in other legal documents, but this is the most thorough I've seen. It's basically just a huge middle finger to all (any?) armchair lawyers who want to weasel out of the order.


See also RFC 2119, which disambiguates the meaning of "must", "shall", "may", and others in specs. It's because of that RFC that you often see those words in caps in a spec.

https://www.ietf.org/rfc/rfc2119.txt
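
For reference, the boilerplate line that specs include to invoke it reads (quoting RFC 2119 itself):

> The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.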


Yup, that is the "don't try to wiggle out of this with some thin reasoning about scope" paragraph.


Need me one of those for internet comments.


IIRC my employment contract has something along those lines, implying that if a word was accidentally omitted (or added) then the sentence and paragraph still retain their original intended meaning (?!)


It didn't spell out what the definition of 'is' is, though.


The part about verb tenses is specifically referring to the "what 'is' is" loophole.


He's currently throwing a temper tantrum on Twitter, and all because the NHTSA wanted to ask him a few questions about the product (you can see it here: https://www.scribd.com/document/329218929/2016-10-27-Special...). Like, how dare the government try to have basic standards for multi-ton hunks of metal hurtling down the highway without human input.

I've been trying to convince people lately that Silicon Valley isn't all delusional technolibertarian assholes, but guys like this sure aren't making my case easy.


Exactly. The NHTSA memo seems extremely reasonable to me. If you're going to bring to market a product where the smallest malfunction could easily kill many people (both those using your product and those around them), answering a couple of regulatory questions seems like a small hurdle to pass. Nothing here is saying "we want to shut you down", it's saying "we want to make sure you pass the most basic safety standards". That's an obvious part of a government's duty to keep its citizens safe.


You think a threat of $21k in fines per day is reasonable? This pretty much guarantees that small businesses can't compete in this market. The financial risk is too high.


If you can't pay $21k in fines, you definitely can't afford the engineering and safety testing required.


$21k per day would add up fast.


I feel a $21,000 fine per day sounds pretty much like "we want to shut you down". He never presented comma.ai as an autopilot but as a very smart lane assist and adaptive cruise control.


"As you are undoubtedly aware, there is a high likelihood that some drivers will use your product in a manner that exceeds its intended purpose."

The government appears well aware of this hand-waving garbage that companies are trying with "it isn't really autopilot, keep your hands on the wheel, wink." This is exactly the government's job and is a fabulous example of a government reacting to ongoing events that have proven that so-called autopilot systems are inherently a danger to the public at this stage.


Yes. There are a number of federal agencies that are a shit-show at best but I can't remember a time in my adult life when I was disappointed in NHTSA.


The NTSB also does amazing work, though it's the smallest Federal agency (~400 people), which I think probably helps greatly.


NTSB has awesome investigators. Some of the smartest people I've met and worked with in my life. They are very serious and dedicated, while having to put up with a lot of roadblocks, nonsense, and other issues from both corporate America and other government agencies.

Can confirm, though it was many years ago: the internal NTSB IT staff/programmers/DBs were a joke. I guess that's just government, though, and like much of government they relied a lot on outside contractors. For years they were a ColdFusion shop, so I'll let your mind start there.


>inherently a danger

Can you expand on how they are an inherent danger, rather than a possible danger? I only challenge you because Tesla has published several occasions where Autopilot has possibly saved lives by auto-braking.


> Can you expand on how they are an inherent danger, rather than a possible danger?

I do not believe those words (inherent & possible) are in any way mutually exclusive. They might even mean the same thing when describing a latent danger. So I do not quite understand the basis of your challenge. Also, I'm not parent poster.


My understanding of inherent might not be right, then. I think of it as "it is definitely a danger, and thus can only be described as dangerous," which I would argue is not mutually exclusive with possible but is mutually exclusive with "not always dangerous and maybe even anti-dangerous."

So what I mean is, the autopilot could turn out to be safer than manual, in my mind, so how is it "inherently" dangerous.


A swimming pool in your backyard is an inherent danger. That doesn't mean anyone will be injured or killed, but a deep body of water with sheer walls is inherently dangerous.

It would nonetheless also be accurate to describe it as a possible danger, because the danger only manifests in certain circumstances.

> So what I mean is, the autopilot could turn out to be safer than manual, in my mind, so how is it "inherently" dangerous.

That could well be the case but I don't think it eliminates the inherent danger. Maybe it's less dangerous, but cars on the road are just kind of dangerous in general.

I think it's also very generous to assume that the early auto pilot car systems we have today are necessarily safer than human drivers. Emergency braking is almost certainly a net gain. Self driving might be or might not be. A self driving add-on cooked up by geohot? I'm doubtful it's yet gone through the kind of engineering rigor we'd expect of a system like this.


"So what I mean is, the autopilot could turn out to be safer than manual, in my mind, so how is it "inherently" dangerous."

In the same way that the ocean is still inherently dangerous even if you're wearing a life jacket, or how airplanes are still inherently dangerous even though aircraft autopilot systems are effectively mainstream at this point.


It's inherently dangerous because it's a bunch of hardware and software that takes over a 2-ton vehicle and turns it into a software-guided missile. That's an inherently dangerous set-up that will need a lot of thinking to keep it safe for general use.

Just like a piano suspended from a rope is inherently dangerous. The danger doesn't have to manifest at all for that to be the case.


My statement was that "ongoing events that have proven that so-called autopilot systems are inherently a danger to the public at this stage". The important part to note is "at this stage" where at this stage refers to this stage of development and implementation.

Contrary to Tesla's publishing several occasions where Autopilot has possibly saved lives, there have been concrete incidents where people have died while Autopilot was in use. Tesla has backed away from these incidents with the same hand-waving that NHTSA has specifically touched on here: that the user is exceeding the system's design purpose.

In this context ("at this stage"), where consumers have historically and will most likely continue to exceed these systems' design purposes and operate them outside of their intended scope, it seems very clear to me that these systems are inherently dangerous to the public. That is, there is no way to make these systems not a danger to the public in their current form.


The NHTSA are fine with auto-braking - indeed, they want it to be standard on all cars in the future[1]. What they're not OK with is companies marketing glorified cruise control as a self-driving feature. Elon Musk likes to lump his implementation of both under the same "Autopilot" umbrella for PR reasons, but you can have the former without the latter - indeed, I'm pretty sure the most prominent example he gave of the former saving a life was with the latter switched off.

[1] http://www.autoblog.com/2016/03/17/nhtsa-iihs-20-automakers-...


Proof that so-called autopilot systems are a danger to the public would be evidence that they perform worse than the average driver. I haven't seen such evidence, what are you referencing?


It's the other way around - you need proof that they perform comparably to an average driver during expected use, and that they do not have absurdly stupid corner cases during actual expected use.


Well, humans have absurdly stupid corner cases during actual use, too. It depends on how small the corner is, I think.


Yes and humans require certification before they are permitted to drive vehicles on public roadways.


To get that certification, do they test you first, or do they wait for you to screw up and then take it away from you??


We know, being humans, that we have the ability to process lots of complex information in a way that's very difficult for computers to replicate. Hard AI doesn't actually exist (yet). We also have 100 years of humans driving cars worldwide, so we understand well what they're good at and what they're not, and laws & safety designs take all of this into consideration.

Each computer system will be encountering new, diverse things in the real world without a good understanding of how they'll perform. There are lots of crazy hard problems here that no one has solved yet. So to suggest we just automatically trust it because humans make mistakes is foolish when the consequences are so high. If someone came out with a surgery "autopilot" tomorrow, would you suggest it start giving triple bypass surgeries right away without FDA approval because humans make errors too?


One of the features of the common human firmware is self-preservation instinct. It lets us trust that our fellow drivers, while still prone to mistakes, won't generally make obviously suicidal errors. Can one say the same about a new ML algorithm running on some board designed half a decade ago? How exactly would one know, without a thorough audit?


They test you first, but if despite passing the test you screw up sufficiently badly later they decertify you.

It's a belt-and-suspenders system.


We've been dealing with those corner cases for thousands of years and we know them pretty well. Given that we all run on pretty much the same hardware and firmware (with minor differences), you could say humans have been thoroughly tested for a few millennia, including a hundred or so years of road testing.

So yeah, a bit of thorough testing of completely new hardware running completely new software isn't too much to ask for.


When you measure the safety of autonomous cars by counting the miles they travel, and not counting the miles that they intentionally avoid or defer to a human, you have succeeded in creating a metric that looks like it's useful for comparison, but is actually completely meaningless. It's like measuring typing speed while ignoring typos.

Until autonomous vehicles are subject to the full spectrum of conditions that human drivers face, incidents/mile is not a meaningful measure of comparative safety. And until then, the burden of proof should be on the creators.


Not true. Lots of accidents happen in "good" conditions because humans suck at driving.

The way you compare is you give a self-driving car to a person, compare actual usage for that person, and see if owning a self-driving car increases or reduces accidents for that person.

Being perfect only half the time is still revolutionary.


The frequency with which an autonomous driving system relinquishes control is one important metric for estimating the actual risk. Measures like this are better and faster than using the actual tally of catastrophic events, because there is more, and more frequent, information. Engineers working in high-risk fields have been doing this sort of analysis for decades, and there is no reason to make an exception for autonomous road vehicles.
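
To make that concrete, here's a rough sketch (my own toy numbers, not anything from an actual fleet or agency) of how counting disengagements as potential incidents changes a naive incidents-per-mile comparison:

    # Toy sensitivity check: treat some fraction of disengagements as crashes the
    # human prevented, and see how the comparison with a human baseline shifts.
    # Every number here is a made-up placeholder, not real data.

    AUTONOMOUS_MILES = 2_000_000        # miles driven in autonomous mode (hypothetical)
    CRASHES          = 1                # crashes while in autonomous mode (hypothetical)
    DISENGAGEMENTS   = 400              # human takeovers (hypothetical)
    HUMAN_CRASH_RATE = 1 / 500_000      # crashes per mile for humans (hypothetical)

    def adjusted_rate(fraction_bad):
        """Crashes per mile if `fraction_bad` of disengagements would have
        become crashes without a human takeover."""
        return (CRASHES + fraction_bad * DISENGAGEMENTS) / AUTONOMOUS_MILES

    for f in (0.0, 0.01, 0.05, 0.10):
        rate = adjusted_rate(f)
        verdict = "better" if rate < HUMAN_CRASH_RATE else "worse"
        print(f"assumed fraction {f:.2f}: {rate:.2e} crashes/mile ({verdict} than human baseline)")

The point isn't the specific numbers; it's that a headline "crashes per autonomous mile" figure can flip from better-than-human to worse-than-human under fairly modest assumptions about what the takeovers were preventing.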


I have a 100% perfect safety record over probably hundreds of thousands of miles of driving a class B truck with my knees instead of my hands, using cruise control, on straight empty highways in Nevada, Idaho, and Utah, with perfect weather and good road conditions. There's nothing revolutionary about my ability to perfectly drive with my knees...I just very carefully selected the conditions where I was willing to do it.


You may have that perfect safety record, but other people don't.

People still crash every day in good conditions. That's thousands of lives that could be saved with our imperfect, sunny-weather, highway-only self-driving car.

You underestimate how bad human drivers are even in perfect conditions.


You are completely missing the point, so I'll try explaining via a Reductio ad absurdum hypothetical.

Let's say that Semi-Autonomous Cars are currently tested on about 80% of the tasks that humans currently face in the real world, and that through some feat of engineering and design we don't have to worry about the ridiculously messy transitions between Autonomous Mode and Human Mode.

And let's say that for those 80% of tasks, the Semi-Autonomous Cars have a 0.05% accidents/100k miles incident rate.

And let's also say that Humans on average have a 1% accidents/100k miles incident rate.

What you're telling me is that the Semi-Autonomous car has a much better safety record, so give us Semi-Autonomous RIGHT NOW OR WE'RE ALL GONNA DIE!!!

But what I'm telling you is that you don't know enough to make that decision yet, because you don't know exactly how well humans do on the 80% that Semi-Autonomous cars currently handle; you merely know the average accident rate over the current 100% of scenarios. For all you know, Human drivers could have their average 1% accident rate as a result of a 0.0% accident rate for that 80% subset and a 5% accident rate on the remaining 20% that Semi-Autonomous cars can't handle. And if that were the case, then forcing us all to use Semi-Autonomous cars would actually increase the average accident rate from 1% to 1.04%.

Until you fully understand what Semi-Autonomous Cars are capable of, AND know how well human drivers do on that restricted subset, you can't definitively say that current technology is better than humans.
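
Plugging the made-up rates from that hypothetical into the weighted average makes the point explicit:

    # Worked version of the hypothetical above, using its own made-up rates.
    human_easy   = 0.0    # human rate (% accidents/100k miles) on the 80% "easy" subset
    human_hard   = 5.0    # human rate on the remaining 20%
    machine_easy = 0.05   # semi-autonomous rate on that same 80%

    humans_only = 0.8 * human_easy   + 0.2 * human_hard   # 1.00% -- matches the stated average
    mixed_fleet = 0.8 * machine_easy + 0.2 * human_hard   # 1.04% -- slightly worse overall

    print(round(humans_only, 4), round(mixed_fleet, 4))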


I understand what you are saying, I just disagree with the facts.

"Human drivers could have their average 1% accident rate as a result of a 0.0% accident rate for that 80% subset "

No they couldn't, because they don't.

I am asserting that for this specific 80% of perfect conditions, humans are still terrible drivers. And that being better than them is EASY, because of just how terrible humans are at driving (even in "perfect conditions").


> No they couldn't, because they don't.

The numbers were deliberately exaggerated to make the point. The fact stands though that until you know what the numbers are, the best answer is not easy to come by.

> I am asserting that for this specific 80% of perfect conditions, humans are still terrible drivers. And that being better than them is EASY, because of just how terrible humans are at driving (even in "perfect conditions").

And I am asserting the opposite: that the appearance of safety of autonomous vehicles is the result of highly selective conditions with near-laboratory levels of control, the likes of which are so monumentally easy to handle that even humans, as shitty and inattentive as they are at driving, can handle them with comparable levels of safety. And I certainly think it's possible that computers will lag behind humans for another 20-50 years while we slowly develop the massive body of fast-heuristics research necessary to make NP-complete planning decisions with the speed and capability of even below-average humans.


>>humans suck at driving

Proof?


Well, I see it roughly every week of driving or so...

Just two days ago, as I was driving on a roundabout, some lady entered the roundabout right in front of my car. Fortunately it was a two-lane roundabout and the left lane was empty, so I could swerve to avoid crashing into her left-hand side door (I was coming straight at her).

At the traffic light I asked her if she had not seen me and she went 'Seen what? Where?', so apparently she had not seen me at all, which is pretty impressive given that I was less than 5 meters away from her when it happened.

This sort of thing happens with some regularity. All it takes for a situation like that to turn into an accident with possibly a fatality is for me also not to pay attention.


So we should wait till a statistically significant percentage of people die before we tell everyone, "yep, definitely dangerous, let's add some regulation"?

I wonder if they could shut him down on the basis of car insurance requirements and the legal definition of "driver". If "driver" is defined as the "person" inputting control commands, then either you don't consider the computer a person, and then nobody is legally "driving" the car, or else the software somehow meets the definition of a "person" or driver, in which case it's an unlicensed driver operating the vehicle and insurance may not have to cover it.


The NHTSA has been a strong proponent of self-driving technology and has gone out of its way to make sure there is a path to doing it legally, without pulling the legal 'gotcha' that you described here.

The device would probably be SAE Level 2, and should be regulated as such.


Beyond the obviousness of what you have overlooked, it's also worth pointing out that not all autonomous car usage will be substitutive. What if I just send my autonomous car to go take a picture of something? I might not have done it before, but now I will because it only costs a few cents/mile and none of my time.


Another really cool use case will be sending your car to drive around the block for an hour or go back to your house and park, while you are going out to places. Beats paying the $10 to park for an hour.


Lots of interesting effects there, all the way up to basic urban land-use decisions.

Suddenly the price people will pay for parking is going to be closely related to the price of gas. The maximum value of a parking space in the city is basically the cost to send the car back out to some suburban parking garage during the day and have it come back in later for you. My guess is that the market-clearing price is going to be a lot lower than it is today. Lots of urban parking structures might end up getting repurposed for other uses, if you can't fill them at hundreds of dollars per space per month.

In the short run I could see the average self-driving car's gas usage being higher than a conventional car's, but in the longer run the same self-driving features might make fully electric cars more palatable. Long charging times don't really matter if you can let the car go and charge itself while you're at work, for example.
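
A back-of-the-envelope version of that parking ceiling (every number here is an illustrative guess, not data):

    # Rough ceiling on what a downtown space can charge once a car can park itself elsewhere.
    round_trip_miles   = 20      # downtown <-> cheap suburban garage and back (guess)
    cost_per_mile      = 0.20    # dollars per mile for fuel/electricity plus wear (guess)
    remote_parking_fee = 2.00    # dollars for the suburban space (guess)

    ceiling = round_trip_miles * cost_per_mile + remote_parking_fee
    print(f"downtown parking is only worth about ${ceiling:.2f}")   # ~$6 vs. $10+ per hour today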


And if the place it goes to charge itself happens to have a pile of solar and wind, even better! Way easier to do a big installation in a less populated area than right in the centre of downtown.


By "we want to shut you down" I meant the NHTSA isn't presenting him with so many regulations and requirements that meeting them is impossible, effectively ending the product. They are definitely using the fine to force him to comply with their request to meet the very minimum bar of answering their questions about his product. This, as mentioned, makes total sense to me in terms of what government should do to protect the safety of its citizens.

Edit: grammar


Actually, even the $21k fine should be nothing if he got $3M in venture capital. What's questionable is whether somebody can safely build an almost-self-driving car without a good team with many years of experience in the field.


$21K per day. Under five months to consume $3M entirely.
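
For concreteness, the runway math against that daily penalty (ignoring every other cost):

    # Days until $3M of funding is consumed by a $21k/day penalty alone.
    print(3_000_000 / 21_000)   # ~142.9 days, i.e. a bit under five months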


The fine was up to $21k per day subject to a maximum of $105k


Actually, the maximum is $105MM.


The fine is punitive. Noncompliance is often not acceptable when it comes to safety and environmental regulations. When I worked in plants, you can bet there were only two groups that would put the fear of God in operations: OSHA and the local environmental regulations agency. Fines could be thousands per hour (or per infraction, which could be collected hourly). Even if it's unreasonable (and for damn sure it can be - argon metering was my personal most hated), you better comply until you successfully sue or negotiate it down.

And it is likely necessary. Meeting regulations is far harder and more expensive than just the feel-good of having met the reqs. If there wasn't a stick held up as a credible threat, plants would just trash everything. Not because they're evil, but because plants simply can't care - they aren't people. Thus these agencies are impersonal and brutal in kind.


Not to dispute your point, which I agree with, but --- argon metering? How can that be a safety or environmental hazard? You don't want to lose it overboard because it's expensive, but it's a noble gas, so it should be inert, right? What am I missing?


https://en.wikipedia.org/wiki/Inert_gas_asphyxiation

It can displace air, causing death.

Here's one example of a death, although here they deliberately put the argon in there: http://maritimeaccident.org/2014/08/safespace-argoninert-gas...


True, but so can nitrogen, propane, neon, carbon dioxide, hydrogen and fluorine (although you'd need to be pretty unlucky to survive long enough breathing fluorine to asphyxiate).

I know argon is denser than air at STP, and so tends to accumulate in enclosed spaces --- but propane is even denser. (Leading cause of catastrophic accident in yachts: gas explosion. It's not like you can have a hole in the bottom to let the gas out in the event of a leak.) And any plant which deals with liquid gasses is going to prioritise ventilation anyway, which should easily deal with any gas buildup.

The parent sounded like there was something specific to do with argon which made the regulators tetchy, and I wouldn't have thought that asphyxiation risk would be enough. Am I wrong?


Propane has a very strong odor added to it (by the propane manufacturers), which means it'll be pretty damn obvious the moment you enter a propane-rich environment.


The fine is only if they fail to provide the requested information by the deadline given. The information requested is not that great and the deadline is entirely reasonable, so the fine is very much not "we want to shut you down," just "we want you to provide this information."


Well, the information requested, if you read the actual definitions, uses a lot of "any" and "all", so if you take that literally it's not exactly an elevator pitch style description they are asking for.

"Describe in detail the features of the comma one"

and

"Provide a detailed description of the conditions under which you believe a vehicle equipped with comma one may operate safely ... [and] a detailed description of the basis for your response to [the previous question] including a description of any testing or analysis..."

and

"Describe in detail any steps you have taken to ensure that the installation of the comma one ... does not have unintended consequences..."

I'm not saying the request is unreasonable, but to say that "the information requested is not that great" isn't really true. To completely answer those questions would take a lot of effort. (Assuming of course that he's done such testing. His canceling the product rather than responding makes you think he realized his answers would not be credible...)


How much is "a lot"? This seems like a couple of engineer-days if it gets down into the dirty details, which isn't huge.

Ironically, if he hasn't done any such testing, the answers would be much easier to generate. "We haven't done any." Not that the NHTSA would take it well....


I could imagine taking a week to do it, and he has 11 days.

I've been involved in writing similar descriptions to provide to NASA, and when you get down to exactly how each module works and need to enumerate every branch, it ends up being a lot of work.


He would have had 11 days to produce something. If he'd wanted to play ball, he could have produced something in the way of a response, then waited for a response to that requesting clarification, then provided more information, etc. I've dealt with similar agencies and often as long as you are being cooperative, the "clock" is fairly easy to reset.

It would appear that he didn't have any interest in cooperating at all, though. Which inevitably leads me to wonder if it's not more of a face-saving maneuver; better to blame failure on the pesky regulators than admit your technology isn't up to the investor pitches you might have made.


I would have assumed that anyone serious about releasing this type of product would have made contact with the relevant government departments and worked with them along the way to ensure they meet the requirements.

Also, the documentation being asked for is something the company should be able to put together fairly easily because they ought to have asked and answered these questions along the way to building the product.


Yeah, determining the requirements of a product you are attempting to produce ought not to be left to the end of design.


Requirements for a product? Sounds too much like a waterfall.

Have you ever considered establishing product direction through use cases branded as "stories" and pulling them out of your ass every two weeks?

/s


These people are too far down the rabbit hole to "get" it :p


No, they claim it to be AI. It's in the name.


> That's an obvious part of a government's duty to keep its citizens safe.

Which article of the Constitution is that? I don't seem to recall it.


I believe it's in the Preamble:

"We the People of the United States, in Order to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity, do ordain and establish this Constitution for the United States of America."

Keeping its citizens safe would likely fall under general welfare.


The Preamble doesn't delegate any power from the several states to the United States, though.


Correct.


And this is how "asshole" gets associated with "libertarians". In order to maximize personal freedom, you need property rights and people to enforce those rights. Just because the government's powers have grown well-beyond what's specified in the Constitution doesn't mean it shouldn't have certain powers.


I'm surprised you haven't learned yet that the constitution grants authority to congress to pass laws, which include laws related to safety.


> I'm surprised you haven't learned yet that the constitution grants authority to congress to pass laws, which include laws related to safety.

Have you actually read Article I, Section 8? You'll have to squint very, very hard to see any authority to pass laws related to safety in it. That's because it's simply not there. The state governments, of course, do have that power, and the Congress arguably has power to regulate cars used in interstate commerce.

But there's simply no constitutional scope for safety regulation of cars built & used in only one state.


Have you read the NHTSA letter? They talk about prohibiting interstate commerce if the product doesn't meet safety standards.

Instead of nitpicking, let's focus on the actual issue. The NHTSA didn't order Hotz to shut it down, it merely asked about the safety of the product, which led to Hotz shutting it down himself.


You say this in a snide way as if it would be a good thing if governments didn't try to keep us safe.

It also raises the question of what you think the Bill of Rights' purpose is.


I am not from the US, but they surely have some legislation that makes building and selling killing machines, without at least some government controlled safety checks, illegal.


>killing machines

You mean like guns?


In my slightly sarcastic definition "killing machines" are every machine that causes people to die.


The US government is not the only government.


The Commerce Clause is often invoked in this type of regulation


That letter was an easy out for him. When you boast about your tech and it doesn't deliver on the promise (and don't forget the Musk challenge), it looks much better if your exit is caused by government interference than if you were to quit on your own. It's kind of less of a failure.

Granted, it's a huge stretch for him based on the language of the letter, but that's how I read the situation.


I think you've nailed it. In his demo video, Hotz said he aims to do 99% of driving. Musk said, "99% is easy, 99.9999% is hard."


It may have been part of the strategy all along. I always found the schedule really aggressive. He announced it with big fanfare in late summer at TC Disrupt with availability end of 2016. I don't think he ever thought that was realistic.


I tweeted a joke yesterday that the Engineering Society had changed its slogan to 'Move fast and kill people'. I'm pretty happy that NHTSA has its feet solidly planted on terra firma and requires a bit of process around releasing something as important as self-driving vehicles.

The more software becomes mission critical the more we will need level headedness and quality rather than tantrums.


Not mission critical, rather life critical. I've worked on several life critical systems as a security researcher, helping to develop controls in tightly controlled environments to keep people safe. Mostly work on software-controlled switches in industrial machines.

As I've followed these developments and those of Google and Tesla, I've been wondering at times if I live in some alternate universe. The level of diligence and care that must go into systems which have the potential to maim and kill is nothing to laugh at. They require serious engineering which takes time. You can find the best and smartest humans on the planet and they'll still fail at building reasonable controls for life critical systems in controlled environments. When you are building these controls for life critical systems which will operate in the diversity of our roadways, honestly the task gets very hard.

I'm not against progress and I do think that self-driving cars are the future. However, it may be wise for those trying to march towards that future at breakneck speeds to take a step back and consider that these systems should first be implemented as backups to humans. Develop the solutions through preemptive braking and accident avoidance, where humans are still very much engaged in the driving activities, or focus on building expertise in controlled environments. On that note, kudos to Google for doing just that. Their first phase has focused on low-speed situations and has been restricted very much to lab work, with paid employees serving as observers and backup.


I've worked on CNC mills and lathes, design of the logic and the hardware driving the servos as well as some fuel estimation software for airplanes.

The degree of care that went into the e-stop design of the lathe and mill controller, and the number of checks and tests of the fuel estimation software, are things that I'm (hopefully justifiably) proud of; that's not stuff to cut any corners with. In both cases it took me longer than Hotz claims to have spent on creating the whole self-driving car code; there is no way you can do this safely with that mentality. It is absolutely irresponsible to take that attitude towards systems like this. Life is not a video game with 3 more lives if you lose this one (or cause others to lose theirs).


This. There is a huge amount of safety evaluations, precautions, training and failsafes we have to go through just for using a Class 4 laser at work in a controlled environment, where the worst possible outcome is permanently blinding someone working with it.

For a machine with the potential to easily maim and kill, to be completely autonomously operated in a chaotic environment where it interacts not just with other computer controlled machines but also with humans, even children, I honestly hope that the amount of safety precautions required will grow very quickly.


Google I'm totally cool with. They've been at this a long time and still don't seem very close to actually releasing a product.

I like Teslas, and what Elon's accomplished has been impressive, but he seems hellbent on "winning the race" to release a self-driving car, and that worries me.


I wish it were possible to buy a Tesla without any autopilot features except basic cruise control, because I know how damnably difficult the edge cases are for a self-driving car, and I'm perfectly capable of driving a car manually, thank you very much. Tesla's cars are so good I'd almost be willing to pay more for one without autopilot.

Almost.


While driving, you have to do a double-pull on the lever to activate the autopilot, a single pull just activates the basic cruise control. So you can still buy one and just never activate it.

Alternatively, earlier editions of Model S shipped without autopilot, which was then sold as an aftermarket upgrade as late as this spring (when autopilot was an optional $2,500 package on a new Model S).


Fair enough re: the older Teslas. As to the new ones, even if I never activate autopilot I'm still driving around in a massive autopilot data collection engine for Tesla Corp, which bugs me.


Reminds me of wanting to buy a TV without all the "smart" stuff. In that situation modularity is preferred by most people with at least minimal tech-inclination. I actually encountered some cases of manufacturers charging more for a lesser, non-smart panel.

A car is a bit more complicated to leave stuff out of, I'm sure, and the volumes are far different. It would likely require at least more resources to have separate parts or lines for some of that, as well as different check procedures and so on. But like you said, paying more could help offset that, though it's a gamble for the manufacturer.

I'm more in the camp of please do not release anything driving a car until you guys can figure out how to secure basic software, keep websites running, parse input properly, and so on. In other words, I'd rather them wait decades if that's what it takes.

Personally, I'd rather be killed by a person rather than a person's dumb choice trusting technology. What concerns me in all these situations is that I did not at any point agree or vote to allow someone in a self-driving car to drive on the same road as me. Until they can mathematically prove 100% it is safe in all conditions and situations, I'm not buying things like "driver assist" if a big part of the justification is that AI is better than a human driver (if so, then why assist?). Others may disagree, so be it - I'm a weird guy. I also hate cruise control. And fat-free ice cream.


It's a good concern to have, but I also worry about waiting too long here. At the end of the day, there's going to be problems that the testing would have never found regardless of how much testing there is, and we need to be psychologically prepared for that.

It took airplanes a long time to become safe (through a long and scary process), and now they're safer than my front porch. Whatever the outcome, self driving cars will very likely be safer than human drivers, and that means more lives are saved than lost, even with the occasional horrific problem. It is entirely plausible that self driving cars are already safer than human ones.


If 10,000 people were killed on the way to making driverless cars safe, and this wild west approach ended up getting them into mass market use a few years earlier, it would save the lives of thousands of people overall.

Is this a bad thing? Is it not justified by utilitarianism?


> If 10,000 people were killed on the way to making driverless cars safe, and this wild west approach ended up getting them into mass market use a few years earlier, it would save the lives of thousands of people overall.

This is a perfect example of the biggest and IMO most compelling critique of utilitarianism -- namely, that it's too susceptible to bullshitting.

People decide what they want to do, then completely invent (or, in more sophisticated cases, carefully fabricate) numbers and time-frames in order to hide their unethical opportunism behind the facade of objective moral assessment.

This is especially problematic when you have to estimate things that are pretty much impossible to estimate without completely bullshitting. For example, "number of people killed by not following certain safety standards", or "impact on time-to-completion of some particular safety standard". We're talking about a novel technology, so there's no way anyone has even remotely high confidence in a sufficiently bounded estimate of these numbers. It's all just bullshitting.

> Is this a bad thing? Is it not justified by utilitarianism?

These are two VERY different questions.


Lots of things are susceptible to misuse. That doesn't invalidate them unless they are in fact misused.

Moreover, I don't see how you can conclude on a whim what's possible to estimate and what is not, given these things take expertise in the relevant field and more than just casual analysis.

Finally, it's not like there is a small margin of error. Can you even really comprehend 30,000 deaths every single year caused by conventional cars and drivers? This is fricking 9/11 times ten happening every year, and you're not even willing to consider that speeding up r&d might save a few lives?


> Moreover I don't see how you can conclude on a whim what's possible to estimate and what is not

Yes, exactly. Like in so many cases, coming up with good values for the parameters is often the most difficult part of the analysis... that was exactly my point!

> Given these things take expertise in the relevant field and more than just casual analysis.

Except in this case, there are a lot of experts, and they all seem to be taking roughly the same approach.

> Finally, it's not like there is a small margin of error

It's not clear to me, at this point, what your assumptions even are. What percentage of those deaths would be prevented by self-driving technology? How many new types of accidents would be caused by self-driving technology? By what percentage would that number decrease if the regulatory environment were more friendly? And what's your basis for these numbers?

Also, it's perhaps telling that there really hasn't been a single example of regulators preventing self-driving up until this one, which is rather extreme. Google, Uber, and the scattering of startups and car companies have all taken a more cautious road to deployment of entirely their own making.

> and you're not even willing to consider that speeding up r&d might save a few lives?

There's a big difference between speeding up r&d and rushing unproven tech to market.


I would absolutely consider speeding up R&D to save that number of lives. I'm 100% behind self-driving cars. I think self-driving cars will eventually reduce the number of road deaths significantly.

I don't think the case has been made yet that today's tech will reduce the number of road deaths. I don't think it's unreasonable to be skeptical of the claim "this works great, trust me".


I'm very skeptical of no-name nobodies like Hotz running comma.ai and very non-skeptical of companies like Tesla.


So what do you propose as a guiding principle in place of utilitarianism?


That would kill the driverless car concept.

See, we already have cars; it's not like we're going from 'horses and buggies' to cars. It is going from one good solution to an even better one, and even though there is a risk of being stuck at the present local maximum for a little longer, going down in order to find the peak of this particular Mount Improbable would not be an acceptable path for the majority of the public.

And without having a time machine or a functional crystal ball you won't know if it is 10K, 100K or a few million people that get killed before you get it right so the bar has been set at 'it needs to be better than the present level before it is acceptable'.


Your scenario would save lives, but we have no way of knowing if we are in that scenario.

If you know that after ten thousand deaths, the technology will be good enough to start saving more lives every year, then your utilitarian argument works. The problem is that you can't possibly ever know that in advance. It's equally plausible that the last 0.001% will take decades to solve, or even remain out of reach indefinitely. Then you just threw away ten thousand lives for nothing.

This is why the burden of proof needs to be on the new system, and why the precautionary principle applies.


I understand where you're coming from, but there is a counter-argument to what you're saying, which is that although there may be some dangers to things like the comma one and Tesla Autopilot, their release may save more lives than they endanger, having a net positive effect for humans. Humans are pretty bad at driving.


That's even MORE of a reason for the NHTSA to ask these questions. If the first-to-market autopilot system is too dangerous, that will destroy the public's trust in any future autopilot system, no matter how much of a net benefit it provides.

It's like how IUDs (a birth control device) are disproportionately unpopular in the United States despite many benefits - all because of one particular brand that caused infections with horrible repercussions:

http://www.motherjones.com/blue-marble/2012/09/why-are-iuds-...


This is an ends justify the means argument. How many lives have been saved TO DATE with self driving cars? How many lives will be saved IN THE FUTURE with self driving cars?

What is the acceptable casualty rate to get from where we are now to where you think we will be in the future? Is your family an acceptable casualty? If not, then why is mine?


"Is your family an acceptable casualty?" is a pointless question. It can be applied equally well to the opposite argument: what is the acceptable casualty rate for not advancing safety in this way? Is your family an acceptable casualty?

Continuing on with the status quo is just as much of an action and a choice as pushing forward with automation. Both choices are going to get people killed. We can either evaluate the choices and choose the one that seems better, or we can ignore the question and possibly have more people die than is necessary, but there's no option where your family isn't at risk.


I don't necessarily disagree with you but one thing that I think a lot of people are seeing is you are relinquishing control of your car to a computer. When I think of computers I often think of things that I CAN MAKE WORK but that aren't inherently trustworthy. Sure they're great for things like math and stuff but when I try to run a new piece of software I expect crashes and they generally happen. NOT ALL SOFTWARE. But a good bit of first gen software.

So do I really want untested first gen software released without extreme safety standards and heavy testing? No. Do I think the net result of that happening would be a safer driving experience? No. Give me the iOS 9 or so of self driving car software or the unix. Not the Amazon Fire phone.


I totally agree on this. Putting my life in the hands of software makes me nervous. I've seen how software is made. But I do it anyway....

I'm not sure if it's entirely relevant here, since the comment above presumes that these systems are safer than humans, and the reply doesn't seem to be questioning that.

I think it's very likely that these systems will rapidly become far better than human drivers, and probably already are in the domains where they're meant to be used. But we definitely can't just take that on faith and hope it's being done properly.


This argument is a red herring. We know the current casualty rate and have accepted it. Nobody gets into a car with the illusion that it's 100% safe.


And if technology reduces that casualty rate, why does the fact that it's new suddenly make it unacceptable?


I didn't say that. My question was: what is the casualty rate currently and what is the casualty rate in the self-driving car future? I am asking what you think this reduction actually is and how high of a price you're willing to pay to see that reduction. I am specifically asking what you think the reduction is so that we can have a discussion on 1) whether that is a realistic reduction and 2) whether that is, in turn, worth the price. It has nothing to do with whether or not it's new.

Is that reduction a result of self-driving or of advances in other safety measures (airbags, seat belts, crumple zones)?

It is clear that your mind is already made up and that you are a self driving car evangelist. Please endeavor to approach these debates with an open mind and in good faith.


My mind is made up only because I'm pretty sure the casualty rate will be sharply lower. Humans are horrible at driving.

As for your question, you basically destroyed the conversation when you asked "Is your family an acceptable casualty?" If you just wanted numbers, you shouldn't have made it personal like that. That's all my point is here.


> As for your question, you basically destroyed the conversation when you asked "Is your family an acceptable casualty?" If you just wanted numbers, you shouldn't have made it personal like that. That's all my point is here.

This is a really fair point but I also don't think it's a bad thing to say this. I think people distance themselves from possible problems by calling things a numbers game but they don't factor in the fact that they could be part of those numbers.

If I say that 27,000 people per year die from listening to Mambo Number 5, and then I say that we can lower that to 15 if we remix it to take out the deadly frequency, but that we will have to test it on 20 people to find out for sure which frequency is the deadly one. That sounds great. A couple of people might die, but what's that to save 27,000/year?

Well, take that and say we will have to test it on 20 people and 5 of those will be your family. That makes the decision a lot harder. Logically it's exactly the same but emotionally instead of "people" it's 15 "people" and 5 of your family.

So, it's important to mention it could be your family because if someone was alright with it because they are looking at the pure numbers, they should be brought back to the reality that we all actually live in a little.


I agree with the effects but I disagree with the desirability! We should be making these choices rationally. Taking an action which causes some people to die but saves a more overall is typically a good thing. Asking, "what if it's your family?" just makes it harder to make the correct choice, and that makes it more likely for your family to be a casualty.

I guess this all boils down to what you think of the Trolley Problem. I'm one of those who say, yes, of course you pull the lever if you have no way to save everybody, it's obviously better.


> Continuing on with the status quo is just as much of an action and a choice as pushing forward with automation

Of course, but public perception is, you know... not largely logic-oriented. Whether or not you personally find @abduhl's argument convincing, there's still that emotional hurdle you'd have to clear to convince significant numbers of people to sign up.


Of course. But acknowledging and working with that illogical public understanding of things doesn't mean I can't argue against it when I encounter it online.


To estimate your questions: probably no lives have been saved so far through self-driving, and possibly one extra guy was killed in the Tesla crash, which you could say was maybe half the system's fault and half the driver not following instructions.

In the future, 1.2m people die a year so say self driving reduces that to 0.2m/year, that's a million lives saved per year.

Your family is thousands of times more likely to die in a conventional motor accident than due to a self-driving vehicle. If unnecessary red tape slows things down and causes 100k extra deaths, is that OK? Even if someone you know is amongst the 100k?

That said, there's an argument that systems like Hotz's should be tested first. That's probably not going to slow things down that much.


> If unnecessary red tape slows things down and causes 100k extra deaths, is that OK?

That's an imaginary scenario. You have no idea if that's the way it plays out. Let me give you another scenario: the first self-driving cars kill a bunch of people, and no one wants to touch them for years after they finally become safe, delaying the adoption of self-driving cars and causing millions of additional people to die.


Very unlikely. Self-driving cars are potentially enormously profitable; with such profitable products I think you'll find that the opinion of an outraged public matters somewhat less than it customarily does.


As a great man once said, "Not taking any action is also an action."


I hate this bullshit argument that 'humans are pretty bad at driving.' Yes there's a wide distribution of driving skill levels, and this is dependent on the cars people drive and their maintenance records. But please don't lump me in the same bag as obviously shitty drivers, of which I've seen scores.


> I'm pretty happy that NHTSA has its feet solidly planted on terra firma

Makes sense, since they regulate the activity that is the 4th largest cause of death in the US.


Have you considered your desire for process in the context of the trolley problem? https://en.wikipedia.org/wiki/Trolley_problem

In other words, is it acceptable to move fast, knowing it will cause deaths, if we could be certain the net lives saved would be greater?


I think it's fair for a regulatory agency to ask a company to at least count how many people are on either track before releasing their product.


I'm happy NHTSA is getting involved and regulating this; however, the skeptic in me wonders if they're doing it for purely altruistic reasons or if the big software players in the industry are essentially lobbying for it to help them with the regulatory capture side. Apparently that's all the rage these days: position your business so as to make competition illegal or as financially cumbersome as you possibly can.


Hotz's Twitter rant is at [1].

Cruise Automation was very lucky to be acquired by GM before they got a similar letter. Similarly, Otto being acquired by Uber. They were both proposing to sell similar add-on systems. Their valuations would have been much lower after receiving such letters.

NHTSA has it right. "It is insufficient to assert, as you do, that 'your product does not remove any of the driver responsibilities from the task of driving'. As you are undoubtedly aware, there is a high likelihood that some drivers will use your product in a manner that exceeds its intended purpose."

Did Tesla get a similar letter? They have exactly that problem and exactly the same excuse. Tesla claims “Autopilot is an assist feature. You need to maintain control and responsibility of your vehicle.”

Here are three dashcam videos of Teslas crashing on autopilot. These are the three where there was a stopped vehicle partially blocking the left edge of the lane, and the Tesla plowed right into it. Tesla blames the driver. Looks like the NHTSA isn't buying that.

[1] https://twitter.com/comma_ai [2] https://www.youtube.com/watch?v=rJ7vqAUJdbE [3] https://www.youtube.com/watch?v=xoSNw_n1Xgk [4] https://www.youtube.com/watch?v=qQkx-4pFjus


Are you sure that they didn't get similar letters? If Tesla or others got a letter like this they wouldn't have publicized it, they would've just actually answered it. There's no way in my mind that Tesla didn't talk to regulators about their stuff at all.


Tesla probably has a dozen people whose sole job is to interact with NHTSA.


Tesla does, and there has been some back and forth of personnel between NHTSA and Tesla, as both have seemingly intentionally poached from each other.


Doesn't really surprise me. I mean, where else do we expect NHTSA to get people qualified to regulate self-driving cars from? I'd rather they came from industry than just not have any background at all.


Academia? That's where government (in all its parts) has for many years gotten a lot of the expertise to write regulations. There are tons of industry experts involved in these processes... but don't assume that NHTSA has no experts in this. Tesla didn't invent the self-driving car, they just sold it. Research on it has been going on since the 80s, and in years past much of the delay came down to legal rather than technical issues.


> Did Tesla get a similar letter? They have exactly that problem and exactly the same excuse.

Is it the same problem? One is an aftermarket kit vs. being built into the car.


Also, Tesla has likely put many dozens of millions of driven miles into validating their perception and control models. Comma.ai has not.

The level of rigor Tesla has applied to this problem, as cavalier as it may seem at times in the news, is many, many times that which was applied to what Comma.ai is building.

Tesla's mode == aggressive, but diligent.

Comma's mode == aggressive, and flippant.


> Did Tesla get a similar letter?

Did Mercedes? Did Volvo? They have the same systems in place in production vehicles.

Mercedes even advertised their vehicles as autonomous.

http://blog.caranddriver.com/mercedes-benz-pulls-e-class-ad-...

> Here are three dashcam videos of Teslas crashing on autopilot.

Glad you had the opportunity to bash Tesla.


Yeah, they probably didn't - because large auto companies have time and resources devoted to safety and compliance.


And bribery


Bribery to overlook certain numbers and results, perhaps, but I think it'd be unlikely to hear that the NHTSA just skipped all regulatory checks entirely, which is what geohot wants.


I meant it more in the sense of bribery to get the regulators to squeeze the little guys as they always do. Big companies love regulation because it kills their competition and lets them progress at their glacial and expensive pace.


If bribery was an option would VW have gone to such lengths to hide true emissions rather than slipping someone a backhander?


I wrote the below less than 12 hours ago for another post still currently on the front page and it is just as appropriate here with the simple change of a company name. The arrogance of our industry seems to be growing.

>One of the biggest flaws of the tech industry is the belief that being a tech expert is enough to disrupt other industries in which they don't have any expertise. Sometimes they get lucky and it works despite something like Uber's or Airbnb's total ignorance of the law. Sometimes it fails like with numerous cryptocurrency companies relearning why the finance industry has so many dang regulations. Seems like comma.ai is falling in the latter camp.


There's still time for AirBnB and Uber to fall in the latter camp. Some have more runway than others.


Come on... Hotz doesn't really represent anyone but George Hotz. He is a unique individual who was blessed with a very mathematical mind but very little social grace.


Sure, but every individual is part of a pattern, and our outgroup is becoming more convinced that there's one pattern in particular at play here.

Take a look at the front page of Hacker News right now: in addition to this story, there's "Soylent halts sales of its powder as customers keep getting sick" and "Uber drivers win employee rights case" (presumably over Uber's fierce opposition). With 30 items on the front page, that means we've got 10% of the news in our field which could plausibly be parsed by outsiders as "technolibertarian assholes". I've been watching public opinion of the Bay Area startup scene gradually eroding in the last half-decade or so, and this is a big part of the reason; it's a real problem that we need to start thinking about.


It has definitely been a noticeable problem since much earlier than that; my guess is it started about half a decade after the dot-com bubble burst. I was in school at Palo Alto Senior High School, and I gradually began to notice really offensive trends in the way "Silicon Valley people" acted. When I left school the damage was already done. One of the last examples I saw before college was in a Facebook group called "Humans of Palo Alto" (which I objected to from the start): there was a picture of two men in nice clothing holding cardboard signs that said "need venture capital money". Their quote was "you know the Drake song 'started from the bottom now we're here'? Well, that's about us."


It's not just outsiders who take this view.


I was hoping you'd show up in this thread ;)

It's definitely not just outsiders as you (and I) can attest; I went with that line of argument to try and convey some sense of urgency about the problem. It's easy to dismiss internal dissent about SV libertarian groupthink as not important, but "outsiders" have the potential to bring the party to an end if they collectively get really grumpy about what we're doing. I thought that might be more motivation.


Unfortunately for all of us, this will likely continue until people get killed in large enough numbers for some common sense to prevail.


Here's something I always wonder about humans: why is it only groupthink when you disagree with it?


And we've now, at the behest of these companies, redefined statists down to:

- thinks it's OK for neighbors and local government to have a say in whether people run a hotel next door, possibly sharing a roof or even a wall

- thinks it's OK to require pretty reasonable background checks ala Austin for taxi drivers

- thinks it's OK to have some safety regulations around self driving cars sharing our roads beyond hey, it worked in my car on this specific stretch of road so, you know, yolo


Congratulations, by waiting a year to implement self driving cars, through overly burdening government regulations, you just killed ten thousand people.

What about those people? The people who WILL die in car accidents next year unless we do something about it. Screw them, right?


This isn't a self-driving car. It's a driver assist. It depends on an alert driver behind the wheel. What we're learning is that these systems can, in some circumstances, be the worst of both worlds --- a human driver lulled into complacency by technology that was not designed to handle complicated road conditions autonomously --- a system that itself generates both human and machine error.


There's an interesting similarity with Air France 447 in your description -- a crash caused by information overload and the normal assist system going into an alternate mode.


No one is arguing about the benefits of self-driving technology in general.

The question at hand is: Does this specific piece of equipment actually work as described, and move us towards that goal, or is it a piece of cobbled together junk that will cause more accidents? If you want to sell this specific piece of equipment on the consumer market, you'd better have a good answer to that question, and be able to back it up with more than "hey, it worked for me this morning!"


my god, idlewords couldn't have written this better


I don't think this can be fixed. Silicon Valley thinks it's the solution, when it's more the problem. You can't drastically disrupt major pieces of the economy and expect to keep large swaths of the pie for yourself; with no safety net, those who used to have jobs end up disenfranchised and permanently economically disadvantaged.


It's the exact same thing as the banking crisis, only with a bit of techno sauce. See, if it works, they'll make bank; if it doesn't, society will pick up the cost.

It's a bet without much of a downside.


Privatize the profits, socialize the failures.


The Uber lawsuit doesn't fit in that pattern. Cab companies are also being sued over the same issue[1], since drivers were always considered contractors. Uber followed the established model there; it's not an example of SV arrogance.

[1] http://articles.chicagotribune.com/2014-03-26/news/chi-suit-...


How about just not caring about "correcting" people's impressions of our culture? It seems like a challenge anyway, probably because it's a fool's errand. Yeah, a lot of people who are brilliant don't give af; why get suckered into believing this is a bad thing? In fact, you don't have to have an opinion about it.


He's a good hacker but he's not particularly mathematical...


But when startup and engineer community praises him and puts him on a pedestal, doesn't it say something about them (us) as well?


I don't think we should be proud of a country that scares genius away.


Wow, thank you for posting the document. That seems like an incredibly reasonable response compared to what I'd expect from a US gov organization. They're basically ensuring he's done due diligence that the product he's selling isn't going to kill/harm people... I found the whole thing extremely polite and reasonable.

Since he hasn't done the due diligence, he's throwing a fit. Sigh... Looks like self-driving isn't something that can be accomplished today without a good amount of resources.


I can't believe the nerve of some of these Silicon Valley randroids. If they're so great, why do they think a little bit of regulation is going to kill them?


At a guess, the list of questions showed Hotz just how far away he really was from having a shippable product, and since the 'fun' bit was over he threw in the towel. Prototyping stuff and proofs of concept are fun; productizing, much less so. Moving to China will not really change that.


And, the list of requests in the special order are not ridiculous at all. They are a bunch of basic questions that I would hope someone making driver-assist hardware would be able to answer.


The first three roughly boil down to "please send us a copy of the manual".


Which is something he should have in a pretty much finished form if he's about to release a product.


I second that. Having worked with blockchain companies, I have seen much scarier letters than this and if the blockchain messes up no one dies.[1]

[1] At least not as a direct cause/effect relationship.


Is this a joke? If not, you clearly didn't read the letter; it's more than "a little regulation", it's a huge set of requests on an unreasonably short timeline.

With regard to the more general point of your post, even if it's nominally possible to eke out a profit under overburdening regulation, you're still allowed to take a principled stand against it.


The timeline is not unreasonable at all.

They were planning to ship by the end of the year -- obviously the regulator should get involved before it's in the hands of users.

I've seen requests from regulators before. If you don't have or can't produce exactly what they asked, you call them up and tell them some parts will take longer. The point is to have a dialogue; the threat of sanctions is just to make sure you work with them.


Two weeks is not reasonable for the demands made in the letter. That, plus a $21,000/day penalty, is a clear "fuck you".


There's no $21,000/day penalty. That's the statutory maximum possible. If NHTSA wants to fine Hotz, they would have to propose a fine, let him respond with a defense, and then convince a court to impose the recommended amount. There's no way on earth it would come to anything like $21,000/day.


Two weeks is totally reasonable. If you intend to sell the thing in less than two months, that information should already be ready.


Why would you have all the random crap the NHTSA demanded ready to go if this wasn't a pre-existing regulation?


What's random about it? It's basically:

- What's in the installation + operating manual? Items 1-3

- Is it safe? Did you do testing? How? Items 4-6

- What vehicles does it work on? What happens if you screw up during installation? Items 7-8

- What happens if an owner screws up and puts it in an unsupported car? Item 9

- Did you think about whether this keeps the car compliant with other safety regulations? In particular, did you think about the legality of blocking/removing the mirror? Did you do testing? Items 10-12

- When are you selling/shipping it? Items 13-14

- Anything else you want us to know? Item 15

Every single item is a completely reasonable question to ask. I'd certainly hope comma.ai knew the answer to each of these questions before they sold it to the public.

Again, if 2 weeks wasn't enough time, then negotiate with them and give them what you have. Threats of daily fines are often boilerplate that get tacked on to ensure that people take the request seriously; I'd challenge anyone to find an example where the regulatee cooperated with the regulator in good faith and still got fined for not responding.


#10-12 is gg. Removing the mirror for your device? Wat?


The requested things are pre-existing regulation requirements, simply worded with leading questions to prevent unnecessary back-and-forth.

For one of many examples, public information seems to show that his device affects the rear view mirror in a manner that violates specific pre-existing regulations, so they ask him how it affects the rear view mirror and how it complies with those regulations.

There are only two reasonable responses for such a question - (a) he submits the evidence he has (and had before receiving that letter!) that his product is compliant with these regulations; or (b) he acknowledges that right now he cannot demonstrate that the product is safe and will not sell the product.

For such devices, documentary evidence (including testing results, and all other "random crap" requested) is an integral component of the device - if you don't have all this done, attempting to sell the product is prohibited. That is pre-existing regulation.


It's an unreasonably short timeline if you're a bakery owner. Not when you're about to release a driver-assist hardware. He should have most of the required documents and information already prepared, or else he really has no business being anywhere near car hardware.


He chose the launch date without making sure he was legal first.


I think this is the most sane, non-politicized way of stating what happened here. You can speculate as to the motives of the regulator (short timeframe, high costs, certain friends), but the fact of the matter is this was a major flaw in his plan which he didn't see coming.

[edit] unless his real plan was to go to China all along...


Do I have to check with every relevant government agency for permission every time I do something? If an endeavor is not specifically outlawed, it should be presumed legal.


If the endeavor is selling a product for use in a highly regulated industry where the failure case is death, then of course you have to work with regulatory agencies. It's not like the automotive industry is a mysterious minefield hidden in uncharted territory.

If he couldn't answer these very basic questions 2 months away from shipping these things, WTF is he even doing. If he hadn't proactively initiated communications with NHTSA long before now, WTF is he even doing. These are all excruciatingly, painfully obvious things to do, and the fact that it never occurred to him to do them brings into question his judgement about pretty much everything, from technical issues to basic life choices. At this point I don't think I'd trust this guy to mow my lawn, much less develop autonomous vehicle software. His reaction to all of this and how he deals with adversity is just as revealing about his competence as everything he's done so far.


> WTF is he even doing

Trying to innovate without being encumbered by the glacial pace of government regulatory bodies?

His refusal to put up with the NHTSA is an indicator of a low tolerance for bullshit, not incompetence.


His particular endeavour is specifically outlawed (e.g. blocking the rear view mirror). The NHTSA is simply trying to establish whether or not he complies with the existing regulations - clearly, he doesn't.


Wat. If you intend to sell it as a product. Yes. Yes you do.

Doubly so if it's a product that is very likely to kill its owner.

And square all that if your product is either used every day or can accidentally cause a large number of deaths.


If you read the letter, it's a lot more than "answering a few questions".

He gets two weeks to provide an enormous amount of detailed information about the product, I assume under threat of legal consequences if any of it contains an error, interspersed with lots of reminders that they can and will close down his business if they feel like it.


> He gets two weeks to provide an enormous amount of detailed information about the product, I assume under threat of legal consequences if any of it contains an error, interspersed with lots of reminders that they can and will close down his business if they feel like it.

He announced the release of the product, was taking orders for it, and said it was going to be shipping by the end of the year. [1]

The timeframe here is driven by Hotz, who apparently has not had any communication with the relevant regulatory agencies. Any attorney he hires would tell him "Okay, let's push back the release to mid next year while we work with the NHTSA on resolving their concerns," and the NHTSA would probably be perfectly fine with working with him.

"Move fast and break shit" doesn't count when you're putting lives at risk.

> lots of reminders that they can and will close down his business if they feel like it

Yes, and they should.

[1] https://electrek.co/2016/09/14/comma-ai-claims-to-have-packa...


If he doesn't already know the answers to these questions, he's got no business selling this product. Everything that they're asking for—essentially the owner's manual, product requirements, and test methodology—would be readily at hand for any serious operation.

You can't just wire up an Arduino to the CAN bus of a 4,000-pound missile and sell it on the open market willy-nilly.


"You can't just wire up an Arduino to the CAN bus of a 4,000-pound missile and sell it on the open market"

best self-driving comment ever.


> He gets two weeks to provide an enormous amount of detailed information about the product

Which either he should have readily available, since he's the creator, or he should have known, since he should be able to demonstrate the safety of the device.

> I assume under threat of legal consequences if any of it contains an error

Usually you hand over what you have, then you negotiate with the regulator for a time frame to answer the more difficult questions. If you're unsure about something, tell the regulator up front.

The correct response is not to throw a tantrum and shut everything down.

> interspersed with lots of reminders that they can and will close down his business if they feel like it

That's the textbook definition of 'regulation'.


> The correct response is not to throw a tantrum and shut everything down.

I think the correct response is to work with them 110%. Having that government body happy with you isn't a bad thing - you could become the reference spec for self-driving if you please them to an appropriate degree.


> > they can and will close down his business if they feel like it

> That's the textbook definition of 'regulation'.

Right, and it's one of the big problems I have with this form of regulation.

I think it violates the idea of "Rule of Law", as opposed to "Rule of Man", which was one of the major factors behind the rise of modern free societies.


Can you explain? You seem to be implying that you believe that the NHTSA is capriciously making demands of Comma but not to other similar companies in a similar stage of market-readiness.


They certainly could, which is part of my point. Whether they do or not depends on the honesty, bias, and personal ambitions of the people running the agency.

The empirical experience is that regulatory agencies over time gravitate towards favoring the big established players in the industry they're regulating.

The economists' term is "Regulatory Capture".


You seem to be critiquing the idea of a government run by people, and the idea that Congress can delegate rule-making authority to people more educated in the particulars of an industry than Congress at large.


I'm fine with government run by people.

Giving some people a lot of discretionary power over others does worry me a lot. It might well be unavoidable sometimes, but it's really naive not to see the potential for abuse.

The calculation needs to be "if this agency is run by the usual corruptible and imperfect people like you and me, will it be a net gain for society?", rather than the "assume a selfless set of civic minded experts..." mindset I think a lot of people have.


> honesty, bias, and personal ambitions of the people

As long it's humans that write and execute laws, this is strictly unavoidable. It doesn't depend on the particular details of how we do regulation.


Under Rule of Law, one set of humans write the laws. If you break them another set of humans decide on your guilt and punishment.

Set 1 (legislators) can and do write laws to favor and disfavor individual actors, and I agree that that is in part unavoidable.

Set 2 (judges and juries) are appointed randomly after it's become a legal matter, so it's impossible for any inappropriate influence in either direction before that.

And there is no "set 3" of regulators that issue orders during the day-to-day running of the business.

So I think you're right that this problem always exists, but in my estimation it's a few orders of magnitude bigger under a "Rule of Man" regulation scheme.


> no "set 3" of regulators that issue orders during the day-to-day running of the business

You mentioned 2 branches of government, the legislative and judicial. But the third branch, the executive, does (in part) exactly what you described above.

Taken to the logical extreme, your view means we should replace every government administrator with a court, including for mundane decisions like whether to grant a marriage license. That system would be horribly inefficient and probably not much more consistent or effective, since random people off the street won't have the domain knowledge a regulator has.

Also, regulators are representatives of the people and are subject to laws just like everyone else. The escape hatch of the legal system is always available -- if you think the regulator got it wrong, then take it to a court, and a judge and jury decides who is right.


The marriage license argument is textbook strawman. That is not regulation of an industry, and you know it.


I agree, but you proposed a stringent definition for 'Rule of Law' that seemed to eliminate the possibility of government regulators possessing independent decision-making authority over citizens and businesses alike.

I mean, how do even mundane things like zoning codes and work permits work in this universe where there are no regulators and gov't bureaucrats? Everything must go through a court? Why are you so sure a group of 12 random people will come to decisions more effectively than a bureaucrat in a domain that requires specialized knowledge?

In any case, I fully agree that sometimes regulators, or regulations are bad. But this particular case is an example of regulation working as designed. The NHTSA has been at the forefront of working with the self-driving car industry to make sure there is a legal path to developing, testing, and releasing this tech. Textbook example of regulators doing their job without imposing undue burdens on the regulated industry.


> how do even mundane things [...] work in this universe where there are no regulators and gov't bureaucrats?

I talked about regulators. You added bureaucrats on your own.

Perhaps NHTSA is a great organization. I know nothing about them specifically. I'm talking about general principles.

> Why are you so sure a group of 12 random people will come to decisions more effectively than a bureaucrat in a domain that requires specialized knowledge?

I'm saying they'll be more impartial. Don't know about effective.


Not quite. Regulatory capture doesn't mean you like the big companies, it means you like the industry you regulate. So if you regulate coal mines, you probably think coal mines are good, whether they're large or small. If you're a highway regulator, it probably means you think highways are pretty awesome, but that doesn't mean you think only Detroit muscle cars should be allowed on the highway.


I disagree.

One of the common ways this works is that the regulator keeps adding more and more complex regulations.

It's counterintuitive that a major company welcomes or instigates added regulations that will cost $20M/year to comply with. But since that makes it more expensive to start competing companies, they can make a lot more money from their increasingly secure oligopoly position.


How do you think regulations get enforced?


He's got two weeks to provide the detailed information he should already have or a reason why he doesn't have it. He could even provide an estimate (see instruction #4 in the Special Order). The threat of legal consequences is if any information/reason is missing or if the order is completely ignored. Getting your company shut down by the government only comes after a lot of back and forth communication and is usually due to willful negligence on the company's part.


Considering the gravity of what they're building, they should already have detailed answers to all these questions. They shouldn't have launched a startup without spending massive time and effort ironing out these details. Two weeks is plenty of time to organize your thoughts.


This is why lawyers exist, and the job they are trained to do. A competent one would have told Hotz on Day 1 that this letter was coming, and would already have the documents ready in a folder with a little pink ribbon bow.


I don't see a problem with that. In fact, I want government regulators to do that. He should have anticipated this process, and been working with them since the beginning.


With that kind of talk, only those "well connected" can readily do that.

Hmm. Regulatory capture? Yeah. That.


Yes, being "well connected" is critical to working with regulatory agencies. A minimum of 56 kbps is recommended.

Seriously, with agencies like the FAA, FCC, EPA, FDA, NHTSA, etc. you can just call them up and you will be connected to a regulator who can help you with every step of getting your product past regulatory requirements. The Small Business Administration also has a lot of people dedicated to helping entrepreneurs with regulatory oversight.


If by 'well-connected', you mean people who have the means and resources to properly test their products, then I agree.


"Product" ? I'm seeing a demo he wasn't selling.

And when a big govt stick comes a'knocking, unless you're also big and well connected, you will get shut down.

This was clearly a way to say "do an insane amount of documentation in a laughably short timeframe, or we'll fuck your business with a fine-stick."

Yeah, GeoHotz is an ass, but this is inane. Great way to kill small businesses.


So what would you have the regulator do here--wait until the product is on the market first?

He's publicly stated they were aiming for year-end release.


That may be what he stated. I can say that I plan to launch pigs from a catapult tomorrow. Doesn't make it true.

I would understand filing that documentation prior to selling. But I'm also familiar with "filing with the govt" being used as a tool of oppression that large companies try to pass off as a barrier to market entry. Lawyers are rather expensive, and requiring more voluminous paperwork requires more lawyers.

And speaking of that, what are the repercussions of what you send to the NHTSA? Say he sends something inaccurate or glosses over details. What is the liability of sending something wrong to the govt? Hmm...

Yes. Wait till he's selling them. And when there's a claim, act on it. Not before.


One problem with that model is that if the product ends up being unsafe and someone dies, it becomes the regulator's fault that they failed to prevent an unsafe product from being marketed and sold in the first place. Heads will roll.

We as a democratic society decided a while ago that the benefit of knowing a car product is probably safe outweighs the frictional burden of testing it before it can be sold.

There's a lot of insinuation here that the regulator is actively trying to kill his product, but no evidence. This seems like a run-of-the-mill request, one that the NHTSA probably sends out thousands of times a year to various automakers and auto parts suppliers.

As a counterpoint, I'm sure Tesla has gotten these letters as well, but the NHTSA didn't prevent them from putting AutoPilot on the market.

> what is the repercussions of what you send to NHTSA? Say he sends something inaccurate or glosses over details.

The letter specifically says that estimates are ok. There's nothing wrong with telling the regulator 'We're not sure yet, give us a month to figure it out please?'.


If that were true, then they could have left off the "or else, pay $21k/day for noncompliance". I've seen plenty of big companies use forced compliance to kill smaller and more agile upstarts.

I remember the tactics used to kill most local butchers and meat shops. Same games, with onerous and idiotic requirements of "compliance" that the big guys can meet. Of course, the big companies got legislation to enforce their standing.

This is not opening a dialogue, this is a shakedown to kill a product.


The $21k is standard regulatory boilerplate and simply notes the maximum statutory rate.[1] If the NHTSA really wanted his company dead, they'd have already gone to a judge to get an injunction to shut him down.

Yes, regulatory capture exists, and incumbents often benefit from it. But I'm having trouble seeing how this particular case is a 'shake down' -- the NHTSA is simply asking him to follow the law, which he should be doing anyway.

[1] https://www.federalregister.gov/documents/2016/03/01/2016-04...


When he kills a bunch of people in China, you're going to feel silly about having made this comment.


Guy can't navigate the regulatory environment in the US, goes to China ... this is the setup for a joke.


So it's ok for people to die because a business couldn't be bothered to test if their product passed the most basic of safety criteria? Because that's what you're saying.


First of all, he's not being asked for an "enormous amount of detailed information". He's being asked some really simple questions about how his product complies with existing regulatory requirements. Which, apparently, it doesn't. Worse still, by abandoning the product he's made it clear that he had no intention of complying with existing regulations and has an astounding disregard for his customers' safety.


Yeah it seems a lot of the community here thinks that if he just threw together a few bullet points in an email response the NHTSA would sit back, say "Okay, great have fun!" and not, you know, make a big deal out of it. I, uh, disagree.


We'll never know, because he's not even trying.

Here's a nicely readable transcript of what they wanted:

https://news.ycombinator.com/item?id=12817061

None of them sound particularly difficult to answer. Maybe they are difficult to answer in sufficient detail to satisfy the NHTSA, but you can always start out by answering them as best you can, and then seeing if they want more. There are people on the other end of this letter, not some unthinking machine. Toss off a quick response and say, "If you have any other questions or need further detail on the answers given here, please feel free to contact me at any time."


If he gave them the docs he certainly has on this stuff, and said "I am glad to sit down and walk you through any other answers or questions you may have", they'd be fine with it as an initial response.

I think you have pretty much already assigned a negative agenda to NHTSA, when they've just requested some info.

While they may have followups if they don't like the answers they get, of all the agencies you'll find, NHTSA is not one that tends to have a serious pre-bias towards anyone.


I was more thinking "why the hell doesn't he have the answers already?"


Having mentioned this elsewhere: I have first-hand experience going between SMEs and X/Y/Z bureaucratic entities for half a decade, and there's a big difference between "having the answers you think are satisfactory" and "providing answers that they think are satisfactory." GI Joe pointed out long ago that "knowing is half the battle"; putting it into a work product that accomplishes its task is the other half of the battle in this case (with no guarantees the responses provided will end the inquiry / threat of fines).


That doesn't really answer the question. You don't need first-hand experience to know that government red tape is onerous. It was flatly predictable that the NHTSA -- the government agency concerned with roadway safety -- would make such inquiries. So why didn't Comma work with the NHTSA and get their blessing before announcing a release date?

I mean yeah, it sucks that the government is so darn picky about things, but that's not an excuse to just skip the process. They should have hired someone like you from the start to ensure that their ducks were in a row when they were ready to release.


This. It's not a secret that the NHTSA is interested in self-driving cars and would be asking the serious questions that he should already have responses for.


Yeah, the extremely short timeline here combined with excessive fines puts this squarely in "F-You" territory.


They specifically state in the instructions "If you are unable to respond because you do not have all or any of the precise information needed to respond, provide an estimate." This would imply that they are willing to postpone the fine given that he is actually putting together the required documentation. The required documentation is not even excessive, he should know most of that stuff off hand and should definitely have documentation for it if he's planning a release this year.


I agree that the fines are set at a level more appropriate for a large corporation, but I looked at the questions, and they're all things I would want to know as a customer, and I would expect for them to mostly already know (e.g. "What weather conditions have you tested this in?").

There's only one question which could be considered too broad, which is compliance with safety standards, but it's generously worded as a yes or no question. He could simply answer "no, I haven't analyzed my product against the safety standards".


As others have pointed out, the short timeline is mainly due to Hotz himself and his plan for putting the product on the market so quickly.


Completely agree, man. I'm pretty positive his organization failed to reach out to the NHTSA for product approval, and now he's upset that they recommend pulling the product before it is tested in their VRTC facility in Washington. [1]

[1] http://www.nhtsa.gov/Research/Vehicle-Research-&-Testing-(VR...


> He's currently throwing a temper tantrum on Twitter

He said the NHTSA made "no attempt at a dialog". What does he call that letter? Ridiculous.


I agree. I love how he thinks the NHTSA should take a test drive. This is delusional thinking. It's going to be nearly impossible for an SV startup to create aftermarket parts for automobiles, because the US Department of Transportation is not interested in beta releases. Him being known as an "iPhone hacker" probably doesn't help his cause. One accident-inducing bug could cripple the driver and the startup. Sure, SV loves disruption, but the DOT isn't delusional.


The problem is that they (Silicon Valley) think the reason their startups move so fast is that they are some sort of new meritocracy, and that other industries move slowly because they're held back by a "good old boys' club" mentality.

The truth is that rapid advancements in software development happen because most software has a failure rate which would be considered appalling in any real engineering discipline.


Yeah, because his tech probably can't do 1/5 of what he's claimed it can, and the NHTSA would've found that out almost immediately.

This is all a PR move by Andreessen Horowitz to pump all their holdings in companies doing self-driving tech. They probably knew he'd never succeed.

It's just odd to throw in the towel after the gov comes knocking, especially after a $3 mil investment. These investors are smart; are we honestly supposed to believe they didn't know this was going to happen?

Or maybe $3 mil ain't nothing and they just threw it at him because maybe he could do it; if not, it's a win-win.

Probably both.


Did he delete all of his tweets? Just found his verified account and it's empty.



I've been trying to convince myself that it's possible that some day people won't be tribalistic assholes constantly playing us v. them in situations that don't need it at all, and posts like yours leave me sad and shaking my head knowing it's way, way in the future.


Sure, but now he's left for China. It's our loss, not his.


Why does he have to answer that? It's just research for now.


My read on this tweet[0], retweeted by Comma.ai, is that they will be shipping an actual product by the end of the year, so I could see why the NHTSA would want answers before then.

[0] https://twitter.com/anakkurt/status/785677337819947008


Wasn't he just soliciting beta testers on Twitter? That doesn't sound like "research".


Beta testers on the road with self driving cars amongst unsuspecting regular drivers, what could possibly go wrong?


It's the government that's stifling innovation by burying it with so-called regulations, to the benefit of the monolithic automotive industry.

His follow up tweets imply he's moving on to China, since they're less likely to be a major hamper of innovation.

And we wonder why the United States' influence is in decline...


For once this really isn't that. I've watched the whole 'comma' saga with some amazement that he'd actually take his barely-ready-for-closed-road-testing pile of cobbled-together stuff out on a public road.

Really, this sort of thing should be done in a much more controlled environment before pulling the trigger on releasing it on the unsuspecting public.

Read the Bloomberg article and be amazed.


> China [...] less likely to be a major hamper of innovation

Another way to look at it: western foreigner looks to beta test a product with the lives of Chinese citizens. I'm not sure how well that's going to go over.


The comments in this thread make me realize that there is a large number of HN readers who have never had to deal with the government.

Yes, the penalty is scary looking, but it is par for the course. You don't get exposed to it if you are doing amateur/freelance/e-commerce.

As soon as you touch critical national infrastructure (telecom, NTSB, healthcare, finance), you have to deal with the government, which never forgets to remind you of the threat of authorized violence and financial penalties.

If you want to play at the high stakes tables, you have to pay the blinds. If that's too scary, then stay at the smaller tables.

The unfairness commentary here is pretty naive.


I feel like another part missing in the comments here is that most US regulations allow an individual pretty extensive free rein when it comes to risky ventures (see experimental aircraft, making your own guns, electronic devices, and a couple of others). The regulation hammer really kicks in once you start trying to market it to other people, i.e. what are you actually selling to people?


Yea, I totally agree with you here. If he hadn't said he was selling this product and branded it so, he most definitely would have some interesting arguments under most of these regulations and the verbiage in the letter. What happens if I install a little motor that controls my own pedals and steering wheel? What if I blogged about it? What if I tried to sell it? I think there were a lot of poor decisions made by Hotz.


Something seems fishy to me. Cruise, which GM bought for $1B a few months back, was working on similar tech. Having only raised $3M, he could have EASILY sold his tech/team to Chrysler or Ford or Hyundai, or any other car company, for $100M and made his investors super happy and himself very rich.

So why "quit" when the feds ask you to deal with regulations that involve passenger safety? It's not like he had to spend $20M on clinical trials.


It is not "similar tech" at all. Yes, both approaches involve some machine learning (which is a huge field). But Cruise's approach is similar to Google/Uber's approach in that it relies on LIDAR. I'd assume (but might be wrong) in that it also relies heavily on premapping, as does the Google/Uber approach.

Comma's approach was complete end-to-end learning with just a single camera (to give you some context, research problems involving two cameras are already considered distinct (but related) research problems from those involving one camera), which is an extremely different approach (the Google/Uber approach can be thought of as more "handcrafted"). From the deep learning researchers I know of there was heavy skepticism of Comma's approach. End to end learning makes for nice demos but is still an active research topic.

From what I can tell, Mobileeye is somewhere in between an end-to-end approach and a handcrafted approach, so I would guess Tesla is similar to Mobileeye on that spectrum.
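
To make "end-to-end" concrete: the idea is a single network that maps a raw camera frame straight to a control output, with no separate perception, mapping, or planning stages. A toy sketch of that shape (my own illustration in PyTorch, not Comma's actual code or architecture; all shapes and values are made up) looks like this:

    # Toy end-to-end lane-keeping model: camera frame in, steering angle out.
    # Illustration only; architecture and hyperparameters are placeholders.
    import torch
    import torch.nn as nn

    class SteeringNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
                nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
                nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
                nn.Conv2d(48, 64, 3), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 50),
                                      nn.ReLU(), nn.Linear(50, 1))

        def forward(self, frame):                   # frame: (N, 3, 160, 320) RGB
            return self.head(self.features(frame))  # predicted steering angle

    model = SteeringNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    frames = torch.randn(8, 3, 160, 320)  # stand-in for dashcam frames
    angles = torch.randn(8, 1)            # stand-in for logged human steering angles
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(frames), angles)  # regress human behaviour
    loss.backward()
    optimizer.step()

The demo-friendly part is that something like this trains from nothing but logged driving footage; the research-problem part is everything that loss ignores (rare events, recovery from mistakes, generalization to roads you never logged).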


There are more details in this lecture Hotz gave at Berkeley this September: https://www.youtube.com/watch?v=Hxoke1lDJ9w

One of the things that seemed missing from the lecture when I watched it is how data was going to be collected for handling exceptional situations/collision avoidance. Hotz also glossed over the question in this interview from July: https://www.youtube.com/watch?v=2zy_07g2IrM#t=35m

I suspect they ran into trouble with edge cases. Also suspicious is this job posting on their website right now:

"Localization Lead Engineer DESCRIPTION

We have over 50,000 miles of video data from cars, and will have millions by the end of the year. Looking for someone to build a SLAM algorithm capable of scaling to the world.

Imagine an API that, given a picture, returns the exact location of the camera with cm accuracy.

Build this

REQUIREMENTS

    Strong math background.
    Ability to write concise, reliable, and readable code quickly.
    Github with stars is nice.
    Computer vision experience a plus.
Basically someone who could have written https://github.com/mapillary/OpenSfM or https://github.com/raulmur/ORB_SLAM2"

http://www.comma.ai/positions.html

Hotz has been claiming that they would not need mapping and SLAM like Google's self-driving car. The whole "Looking for someone to build a SLAM algorithm capable of scaling to the world" sure sounds like "I guess this time he couldn't find anyone on IRC to finish the hard parts for him and let him take the credit" that lawnchair_larry posted elsewhere in this discussion (https://news.ycombinator.com/item?id=12818219).


Yeah. Those are all really hard problems. Visual SLAM is far from solved, especially since this is only monocular SLAM. And even if he did solve that to a pretty decent level (which is already multiple top-PhD-theses' worth of research), that would only get him to the point where he would be if he had just mounted LIDAR on cars and said "now what?"
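
To give a sense of scale: even the most basic ingredient of what that job ad asks for, estimating the relative camera pose between two frames, already looks something like the rough OpenCV sketch below. This is an illustration only; the camera intrinsics are placeholders, and a real localization system still needs mapping, loop closure, relocalization and scale estimation on top of it.

    # Rough sketch: relative camera pose between two frames via ORB + essential matrix.
    # Illustration only; the intrinsics K are placeholders, not a real calibration.
    import cv2
    import numpy as np

    def relative_pose(path1, path2):
        img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # Match binary descriptors and keep the strongest correspondences.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Placeholder pinhole intrinsics (fx, fy, cx, cy).
        K = np.array([[700.0,   0.0, 640.0],
                      [  0.0, 700.0, 360.0],
                      [  0.0,   0.0,   1.0]])
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t  # rotation plus a translation *direction*; absolute scale is unknown

Note the last comment: a single camera only gives you translation up to an unknown scale, which is one of several reasons "cm accuracy from a picture, at world scale" is a research program rather than a hire.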


Is that IRC jab related to the PlayStation cryptosystem breaking ?


> Looking for someone to build a SLAM algorithm capable of scaling to the world.

This has got to be a joke. Text like this proves he was out of his depth.


Anyone who thinks you can easily up and sell a small pre-product company for $100M lives in a different reality than I do...


Yeah this seems like an easy out that allows him to save face.


Unless he tried this already and anyone who took a serious look saw that he had nothing of value.


Every major player in the automotive industry is probably further along in development than this product is. They've probably been running cars on the road with similar capabilities for about a year or more now.


Suspicious especially given Hotz's comments about Cruise: https://www.youtube.com/watch?v=2zy_07g2IrM#t=40m10s


He posted the answer to this on Twitter: "Would much rather spend my life building amazing tech than dealing with regulators and lawyers. It isn't worth it."

Money isn't everything to some people. He has the true hacker ethos.


For me, hacking is about understanding systems and altering them so that they behave favourably to you. The right thing to do when a regulator fires letters at you is to build a firewall of managers and/or lawyers to ensure they don't penetrate the hacking activity.


True hackers ship.


I find it interesting that he was on This Week in Startups only a few weeks ago calling every company he found unconvincing "losers" https://www.youtube.com/watch?v=2zy_07g2IrM

Maybe they're all losers and you're a winner, but perhaps being an elite hax0r isn't all that's needed to go to market...

If you can't reply to a regulator letter, how are you going to deal with supply chain or cash flow issues? Bad reviews in the press or on Amazon? People who want refunds? Not to mention the literal horror of a car crash. To borrow a term injected into this election season, you gotta have Stamina.


Keep in mind that given Geohot's promise of "shipping" comma one by 2016 EOY, it's impossible to keep that promise if they have to deal with regulatory agencies and certification. And there are probably other countries and markets with much laxer requirements; I think this is probably the main reason he cancelled the product in the U.S.

On the other hand, I am really concerned that he and his team didn't appear to think about working with the government agencies beforehand to sort out any regulatory requirements in order to prevent this kind of situation. Shocked!


>> On the other hand, I am really concerned that he and his team didn't appear to think about working with the government agencies beforehand to sort out any regulatory requirements in order to prevent this kind of situation. Shocked!

Well consider the attitude of SV hackers toward the established auto industry. They think we're a bunch of... I dunno... rust-belt, old-school, last century, bumblefucks who don't understand technology. Given that attitude it probably never occurred to him that there might be people or even regulators taking this stuff way more seriously than him.


He sounds really immature and irresponsible. If you look at his code on GitHub, it's full of sloppy stuff, commented-out code, and worrisome little loose ends with comments like "should I check this value?"[1] Even his commit messages are full of sloppy stuff.[2][3] That's fine for hacking on game systems, emulators, or various consumer electronics (and he's clearly quite good at it, given his accomplishments), but would anyone really want to trust their life to code like this? How much testing has he actually done on this thing? By his own accounts, it didn't even work until less than a year ago.[4] The NHTSA is right to question him about safety if he's planning to put this on the market for anyone & everyone to use on public roads a matter of months from now. Choosing to shut down and go work in another country after receiving just one inquiry about safety is a sign that there could be serious problems. A car is not a toy.

(Oh, and in case anyone responds with "well let's see your code!", please note that I'm not the one asking people to trust their lives to my code, making wild claims about what I've built, or issuing challenges to the likes of Tesla.[5])

[1] https://github.com/geohot/kvm-kext/blob/master/main.cpp#L642

[2] https://github.com/geohot/kvm-kext/commit/082b7ca99cba4c3b9c...

[3] https://github.com/geohot/kvm-kext/commit/96441be079562b0dd0...

[4] http://www.bloomberg.com/features/2015-george-hotz-self-driv...

[5] https://electrek.co/2016/04/06/tesla-autopilot-comma-ai-geoh...


It's difficult to be sure about George.

He spoke so derisively about other companies that failed to deliver, and now because of the inevitable paperwork that comes with a product that takes over your car for you at times (level-3 autonomy iirc), he too fails to deliver.

If he leaves to join Tesla, it seems pretty irresponsible, given that he's raised $3.1M and has a team of employees relying on him.

But that's all speculation until we see what comes next.

Perhaps I'm unimaginative at ~1:30am, but I can't imagine what he wants comma.ai to do, unless he sells or licenses the product to automakers who can do the paperwork for him.


He could return the unused funds still.


Wow, sometimes I wish HN came with a "context, please" button. ;)

It's a self driving car (or rather a prototype of an "autopilot" feature like Tesla's), AFAICT.

From [1]:

> After a couple miles, Hotz lets go of the wheel and pulls the trigger... Hotz shouts, “You got this, car! You got this!”

> The car does, more or less, have it. ... Amazed, I ask Hotz what it felt like the first time he got the car to work.

> “Dude,” he says, “the first time it worked was this morning.”

[1] http://www.bloomberg.com/features/2015-george-hotz-self-driv...


It does have such a button! It's the comments button. Of course, it relies on enterprising users such as yourself to make it happen. You have created what you wished for!

And just to confirm, your summary is correct. It's not self-driving, but it is lane keeping and intelligent cruise control adequate for fully automatic highway driving in normal circumstances, like Tesla's current Autopilot is. It was to be an aftermarket accessory that could be installed by the buyer, presumably hooking into the OBD2 port and controlling existing servos. The initial release was to focus on a small number of Honda and Acura models which already have lane assist and cruise control features, but not as sophisticated, so it was able to take advantage of that existing hardware and extend its functionality.
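
As an aside, for anyone wondering what "hooking into" the car even looks like at the lowest level: below is a minimal sketch of passively listening to CAN traffic with the python-can library. The channel name and arbitration ID are made-up placeholders (not real Honda/Acura message IDs), and an actual product layers message decoding, actuation, and a lot of safety logic on top of this.

    # Minimal sketch: passively read CAN frames with python-can.
    # 'can0' and 0x123 are placeholders for illustration, not real vehicle values.
    import can

    STEERING_ANGLE_ID = 0x123  # hypothetical arbitration ID

    bus = can.interface.Bus(channel='can0', bustype='socketcan')
    try:
        while True:
            msg = bus.recv(timeout=1.0)  # wait up to 1s for the next frame
            if msg is None:
                continue
            if msg.arbitration_id == STEERING_ANGLE_ID:
                # Raw payload bytes; decoding them requires car-specific definitions.
                print(msg.timestamp, msg.data.hex())
    finally:
        bus.shutdown()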


And that last bit (an aftermarket accessory, installed by the buyer, that extends the car's steering and cruise control) is exactly why he got the letter from NHTSA.


Wow. If he really moves his project to China to take advantage of a lax regulatory environment even though he couldn't pass the scrutiny in the US, he's being very immoral and cavalier about people's lives in China.

I don't understand how a startup like Cruise can get past this regulatory hurdle in the US but Hotz can't.


I am no expert in Chinese transportation laws and regulations. But I wouldn't say it's a lax environment there; "ad hoc" would be more accurate, as I've seen regulation come out from time to time in response to events.

On the other hand, Shenzhen is the world capital of electronic/hardware component sourcing, manufacturing & production. Geohot's trip to Shenzhen may or may not be an indication of his intention to release comma one in China; it may just be an expected visit to check up on his product's manufacturing.

Also, perhaps Cruise has already been dealing with government agencies and is in compliance with regulatory requirements, which is why we don't hear about the same trouble for Cruise.


If China's transportation regulations are anything like their import/export regulations... "lax" isn't the right way to describe them; "influenced in your favor by money paid to the right people" is more apt.

Disclaimer: Worked for a company exporting an "organic" product from China - which meant paying the harbor master a rather large fee per ship to have them labeled as such. When the product arrives in the US and other countries, it's treated as being whatever it is labeled as.


I assume he was just visiting for manufacturing purposes.

What's interesting is that China has in the past executed people for white-collar crime, and regulatory officials as well. This can happen for a variety of reasons unrelated to regulation (corruption, being in a rival faction to the wrong people, etc.). Anyway, I am not saying it happens a lot or every day, but rather that tolerance with regard to regulators and corporations is not necessarily unlimited, especially depending on whose pockets you are lining and where you are from.

https://en.wikipedia.org/wiki/Zheng_Xiaoyu


If that is his intention, it is also risky from a business perspective. China does have laws and regulations, even if they haven't yet set many laws on self-driving facilities. Understanding those, and appealing to the Chinese population, are very difficult business and marketing tasks, much greater than trying to develop a business in a country he knows, in a culture he understands.

Google, Uber, and so many others have struggled and are struggling still.


It will be fun when he realizes that the lack of road markings and the overall road quality/traffic mess in China make it much harder than driving on your avg. US road (especially in California).


...which means that if he's successful his product will likely be more robust than ones worked through in less challenging environments.

Here in North Texas I just took a weird interchange between two major highways (I-30E to Loop 12N), and at night, with poorly marked lanes and very aggressive exit/entry points, the first thought that popped into my head was "There's no way a Tesla autopilot - or anything - on the market today could navigate this at posted speed even in ideal conditions."

I'm sure it's a bit over-the-top of me to come up with such useless tests, but I love driving and wish there was as much emphasis on actual driver education as there is on innovating driver-less vehicles.


Programming a neural network to learn to drive is fun; dealing with lawyers is not.

I don't think it is a matter of passing the regulatory hurdle; it's a matter of not wanting to deal with it.


That's absurd. You hire people to work on regulatory concerns, then.


So did he naively expect that he would not have to deal with the legal issues?


His rep (from people I know who know him personally) is such that being sent into paroxysms of rage by having to do anything that doesn't involve a computer sounds completely in character.


Someone should have told the people who funded him.


Led by a16z [1], and I think the partner is Chris Dixon.

[1]: https://medium.com/@cdixon/comma-ai-e62eea5fa8d2


Even if he won't encounter such "regulatory hurdles" in China, he'll encounter cultural ones. If anything I'd expect that to be harder.


My guess is that there are other problems with the comma one system, and he is using this as an excuse to back out (not so) gracefully from his previous statements.


They may turn a blind eye in certain cases but China doesn't screw around if something goes wrong. You do hard time or worse.


Not to mention pretty risky for himself. If his invention kills someone in China, he will probably end up wishing he was dealing with US legal system (and/or US prison).


Tell that to the millions of people who have already died because self driving car technology isn't here yet, because the government is stopping it.


I actually think that the government has been surprisingly open and helpful for self driving technology (and I'm a Libertarian so I don't say that lightly).

The media on the other hand... I am sure filling the 24x7 news cycle up every time one of these cars gets in an accident and never covering the advancements has a chilling effect.


A huge proportion of commercial self-driving car projects have senior technical staff who were involved in the DARPA challenges and the surrounding research projects.

It's hard to over-state the role of federal R&D funding in the development of self-driving tech.


Right now, as of this moment, self driving cars are much safer than human drivers.

Every accident that happens and person who dies is a death that could have been easily prevented if companies had been allowed to go to market last year.


> Right now, as of this moment, self driving cars are much safer than human drivers

Maybe if you limit the scope to the sunny streets of California. Maybe. Yours is a ridiculous statement that is not backed by any facts. Humans are adept at driving in conditions that self-driving cars are incapable of handling. How many self-driven miles have been logged in torrential downpours and on snowed-over roads?


If self driving cars aren't good at driving in the rain, then you can just NOT drive them in the rain.

Even in sunny conditions, humans are still bad at driving.

If half of our driving time can be replaced with safer self driving, that is still a huge win.

You still save lives when you make safe driving conditions (sunny California) even safer.


Given how reasonable the NHTSA request looks, giving up so quickly is a bit suspicious. In products like that, it is likely that the first 50% looks much more approachable, and then, as you try to reach something that can actually be shipped to customers, you have to solve huge problems incrementally, and it is very hard. So it looks like a bit of an excuse to stop: if you have a working thing, you try harder before giving up, IMHO.


I agree. I think he hit a wall with his tech, realized it and was looking for an out.


Nobody who is familiar with George is surprised by this. It's unfortunate that his ego has a scapegoat, however. I guess this time he couldn't find anyone on IRC to finish the hard parts for him and let him take the credit.


> I guess this time he couldn't find anyone on IRC to finish the hard parts for him and let him take the credit.

I feel like there is some backstory here I've never heard.



Probably has to do with when he participated in the PS3 or iOS "hack" scenes.


'lawnchair_larry knows way more about this than I do but there's a pretty significant schadenfreude thing happening in security-nerd-land right now.


IRC? Is there more to this?...


A little more detail from TechCrunch: https://techcrunch.com/2016/10/28/comma-ai-cancels-the-comma...

It seems that Hotz said, “dealing with regulators and lawyers… isn’t worth it.”

Which seems very unfortunate/shortsighted due to how useful this tech could be. Why not hire someone to deal with them for you?


That may be just a fig-leaf to plaster over 'we couldn't do it'. It's one thing to release software, it's quite another thing to release this software and have it tested, audited and certified safe and that's definitely part of the process of delivering software that is critical to the well being of the users and the other people you're sharing the road with.

And that second part is part-and-parcel of wanting to operate in this space.


Exactly what I think -- here's the "deus ex machina" that he needed to wash his hands of the whole thing, after having talked himself into a corner where the only other way out was to deliver a real, working, product. The letter the government sent was simply asking for more information and a temporary pause in sales; not any permanent injunction at all. It reads like they know he's kind of a hothead but they're doing their best to convince him that this is in everyone's best interest.


An ex-colleague of mine used to say the difference between a software engineer and a real engineer is that the latter's signature is criminally binding.

We can refuse to be real engineers (it is a great responsibility), but then we should drop the title.


I'm 100% in agreement with that. Engineers carry responsibility for their work (and do so with pride).


We never had the PE title.

In any case, a software engineer still has legal liability. Maybe someone who works on mission-critical systems can comment more specifically here. Maybe it's contract based vs standardized.


I agree. Prototypes are easy, production level products/systems are hard and take time and money. There is also another angle of just hoping to sell the prototype and "team" and let someone else do the hard work of making it real.


And he would have needed to have performed the development under an appropriately rigorous development process.


I wouldn't doubt that regulators and lawyers may be the biggest hurdle in a project like that. But I'm surprised that he did not have a team working on that for him. Elon Musk had to go through that - and probably still is working through a lot of that mess. Maybe it will inspire him to hack the system - and improve it.


> Why not hire someone to deal with them for you?

It's either taking a long time in order to be in compliance / certified or very costly or both.

Hotz mentioned an end-of-2016 shipping date; it's impossible to hit that if they have to be "dealing with regulators and lawyers".


Then it's impossible to do legally, and anyone who doesn't realize this isn't going to succeed in a highly regulated industry.


Hopefully his example is shown to every engineering class alongside the Tacoma Narrows bridge collapse.

He is truly an example of an irresponsible engineer and a great lesson to learn from.


The dude is brilliant, no doubt... But this space requires brilliance and a lot of spine. That is what makes someone like Musk so unique: someone that's willing to risk it all and won't let doors closing in his face or public opinion hinder him. I hope Hotz can put his ego aside and go work with someone who can handle the bureaucracy and red tape. He is surely a prodigy, and it would be a shame if the world couldn't benefit from his ideas.


I'm not involved in the machine vision field but I found his commitment to open and transparent academic publishing very admirable.

He published a paper with their summer intern here: https://arxiv.org/pdf/1608.01230.pdf


This is not peer-reviewed. This is a tech report. Anybody can post these.


Well, you usually need a university address or some endorsers to post to the arxiv, but true, most people could probably post that.

On the other hand, the arxiv is also the default method of communication in quite a few fields (physics and maths, mostly), to the point that while grant committees etc. look at your peer-reviewed papers, I essentially only check the arxiv for new developments, not the plethora of properly peer-reviewed journals.


I looked over that paper, and honestly it is nothing that impressive. Basically it just describes applying off-the-shelf algorithms to a particular kind of data. It could probably get published and is interesting to look over, so I don't mean to deride the paper itself, but it is just not that novel. There is nothing about how their end-to-end self-driving works or how it's better than the many competitors'. So to me this was hugely underwhelming, really.
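
To make that concrete, "off-the-shelf algorithms applied to that kind of data" in this space usually amounts to something like the toy sketch below: a generic convolutional network regressing steering angle from dashcam frames with a plain MSE loss. This is purely illustrative (Keras, made-up layer sizes and input shape), not the architecture from the paper:

    # Hypothetical illustration only -- not the model from the comma.ai paper.
    # A generic CNN regressing steering angle from a single dashcam frame.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Conv2D(16, 5, strides=2, activation="relu", input_shape=(160, 320, 3)),
        layers.Conv2D(32, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, strides=2, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1),  # single output: predicted steering angle
    ])
    model.compile(optimizer="adam", loss="mse")

    # frames: (n, 160, 320, 3) float32 images; angles: (n,) steering angles from drive logs
    # model.fit(frames, angles, batch_size=64, epochs=5)

Every piece there is stock, which is the parent's point: the interesting, hard work in this field is in the data, the validation, and the long tail of driving situations, not in wiring up the model.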


Also, deep learning academia as a whole is already extremely open and transparent about publishing.


Haven't we heard this story countless times from projects on sites like Kickstarter before? There is a huge difference between tinkering in your garage and creating a production product.

Whether it's something that falls under government regulations like Hotz's project, or transitioning from a couple of 3D-printed models in your basement to an injection molded factory, making things (at scale and in the real world) is hard.


I'm just amazed at what a bad name that is. "Comma One"... Did they never think that it would be called "The Coma One" after even the slightest, inevitable incident? Or that a segment of people will innocently misspell it as "Coma" everywhere? And how quickly that could catch on?

It's so obvious it seems almost Freudian ...


It's not a great name, but I'd give him the benefit of the doubt and assume it's "all things AI". Think classification language, like "Car, Self-Driving". They could be "Car, AI" and "Assistant, AI".

Less charitably, it's step one toward Tres Commas.


Given that he referenced the HBO show in his Techcrunch presentation, I'd say the reference to the three comma club is probably the most likely.

Also, one comma would be $1000, which is what the thing was supposed to cost.


I don't know if it was ever intended as a consumer-facing name, but even without the "coma" puns, it's too wonky for customers, don't you think?

Besides, isn't one comma 99k worth of room? It's marketing lingo aimed at the hubris of venture capitalists.


/ or yeah a bit from Silicon Valley, got it.


I can imagine overhearing the mass market consumers: "Oh, the name is inspired from classification language ... "


Yes when you point it out, it's a tremendously awful name.


Do I think Hotz has a viable product? Yes. (A comment over at Jalopnik linked to a Tweet where an actual human rider / journalist experienced the Hotz vehicle and was quite impressed)

Do I think there's significant political pressure to have his product, ahem, driven into the ground as to not be a viable competitor to larger firms? Yes.

Do I think it's a sign of immaturity to pack up your stuff and leave when confronted by a small challenge? Yes.

Do I think dealing with the NHTSA (and potential NTSB) is a small challenge? No.

Am I still a fan of George Hotz as an inventor, innovator, and persona? Yeah, I can see where he's coming from.

Do I think it's hilarious that a vocal contingent here criticizes him for taking a "path of least resistance" (regulation) in his development and iteration process? Absolutely.


> Do I think there's significant political pressure to have his product, ahem, driven into the ground as to not be a viable competitor to larger firms?

I think it's likely that he's getting exactly the same treatment one of those "larger firms" would get if they did the same.

The difference between him and those "larger firms" is that the "larger firms" would've been planning for the regulatory checks from the start. They would never have been blindsided by a request for information because they'd have known this stuff is a legal requirement and would've been proactively having discussions with regulators, figuring out what they'd need to provide, etc.

And speaking from what I've seen working at a startup in a heavily-regulated industry: federal regulators aren't black-suited startup-killing robots. They don't seem to care much how new or "agile" or "lean" or "disruptive" a company is. They care that the things or people protected by the relevant regulations... are actually protected. Crazy idea, I know, but it certainly does appear to be the case most of the time (cue people lining up with anecdotes to angrily tell me I'm some kind of statist apologist whatever and shouldn't be listened to...).


> Do I think Hotz has a viable product? Yes. (A comment over at Jalopnik linked to a Tweet where an actual human rider / journalist experienced the Hotz vehicle and was quite impressed)

Journalists being shown demos in controlled circumstances are a notoriously unreliable source of information about the viability of a product.


That's a fair point, and we shouldn't over-weight one journalist's experience.

However, I think that in this case, the author of the tweet has a lot of credibility (https://twitter.com/AlexRoy144/status/791996855114694657) as the holder of multiple performance driving records. You're right that it is only one anecdotal perspective, but it's kind of like a prominent Apple blogger tweeting that a new Microsoft product has phenomenal UI. It's just one perspective, but (to me) it's interesting enough to merit further consideration.


> You're right that it is only one anecdotal perspective, but it's kind of like a prominent Apple blogger tweeting that a new Microsoft product has phenomenal UI

To stretch your analogy: when the FCC sends Microsoft questions about the product's RF emissions, does the prominent blogger's endorsement really matter? Firstly, RF is out of the blogger's wheelhouse and secondly, hypothetical Microsoft ought to have planned for regulatory compliance from the beginning, or at least have the technical specs recorded somewhere.


Oh come on, do you understand the concept of journalism? It's a reporting profession. It's not goddamn PR.


I'm just saying that there are many cases where "a single instance of a journalist having a good experience with a revolutionary pre-release product" turned out to have been the company deliberately misleading the journalist. I'm not claiming that happened here, but I am claiming there is a non-negligible chance it happened.


Fair enough, I'm not trying to say the profession is 100% on the up and up, and you're right, people can be manipulated. That doesn't mean it's reasonable to dismiss a journalist's impressions out of hand (impressions being cited on an auto enthusiast site, mind you) as worthless, like you did. It just struck a nerve because I'm a pretty cynical person but do my best to weigh what I've seen and gathered, and while I do admit I think George is an interesting underdog type and sympathetic in that regard (so to speak), I'm noticing a lot of open hostility toward the product being cloaked in dislike of the person.


I'd agree with this. The journalist in question (Alex Roy, https://en.wikipedia.org/wiki/Alex_Roy) is an auto enthusiast and race driver, and is well respected even in auto enthusiast circles, which are notoriously anti-self-driving-cars.

Journalists can be manipulated, but Alex Roy's tweet (https://twitter.com/AlexRoy144/status/791996855114694657) is more an endorsement by a car enthusiast with 30+ years of high performance driving experience than a journalist, IMO.


Thanks, I appreciate your context because I think such quality of sources matter. In this instance, I thought it was appropriate to include a 3rd party of sorts who really doesn't have skin in the game.


I'm legitimately not sure if this is sarcasm or not.


I've had an issue with Hotz since the fail0verflow conference talk about breaking the PS3 crypto; I think he oversells his ideas to the uneducated iPhone crowd and the media. I understand his wish to push things forward and his disdain for large corps and governments, but he fails to deliver calm, solid proof; mostly diss.

In life-critical domains like medicine and roads, the hacker way, the reckless plan (path of least resistance), feels wrong to me.


Let us know when you think again.


I guess that makes Hotz one of the "jokers" now?


The attitude in the tech industry is to ship buggy products before they are complete. I'm so tired of wasting my time with buggy crap.

I think it's awesome that a government agency said that this practice is not acceptable when someone's life is at stake.


Here's what the Special Order demanded:

1. Describe in detail how the comma one is installed in a vehicle and provide a copy of installation instructions for the comma one.

2. Describe in detail the advanced driver assistance features of the comma one, including how those features differ from the existing features of the vehicles in which the comma one is intended to be installed.

3. Describe in detail how a vehicle driver uses the comma one and provide a copy of user instructions for the comma one.

4. Provide a detailed description of the conditions under which you believe a vehicle equipped with comma one may operate safely. This description must include

a. The types of roadways on which a vehicle equipped with comma one may operate safely;

b. The geographic area in which a vehicle equipped with comma one may operate safely;

c. The speed range in which a vehicle equipped with comma one may operate safely;

d. The traffic conditions in which a vehicle equipped with comma one may operate safely;

e. The environmental conditions in which a vehicle equipped with comma one may operate safely;

f. The amount and type of driver inputs necessary for a vehicle equipped with comma one to operate safely.

5. Provide a detailed description of the basis for your response to Request No. 4, including a description of any testing or analysis to determine safe operating conditions for a vehicle equipped with comma one.

6. Describe the steps you have taken or plan to take to ensure the safe operation of a vehicle equipped with comma one, including but not limited to automated shutoff of comma one features and owner education.

7. Provide a list by make, model, model year or year of production of each vehicle for which you support or anticipate supporting use of the comma one.

8. Describe in detail any steps you have taken to ensure that installation of the comma one in any supported vehicle does not have unintended consequences on the vehicle’s operation.

9. Describe the functionality of comma one, if any, if installed in an unsupported vehicle.

10. Have you done any analysis or testing of the impact or potential impact of comma one on the vehicle’s compliance with the FMVSS? If yes, please describe the analysis or testing in detail and provide supporting documentation. If no, describe why not.

11. Describe in detail how the comma one impacts a vehicle’s rearview mirror, including whether it requires removal of the rearview mirror or the extent to which it blocks or obstructs the rearview mirror.

12. State your position on how the comma one does or does not affect a vehicle’s compliance with FMVSS No. 111, Rearview Mirrors (49 CFR 571.111), and provide any supporting information or documentation to support your position.

13. State the date on which you currently plan to begin selling the comma one, and provide a list of all retailers and/or websites through which you anticipate selling the comma one.

14. State the date on which you currently plan to begin shipping the comma one.

15. Provide any other information which you believe supports the safety of the comma one.


That's an incredibly reasonable list and something anybody active in the self driving car space should be more than happy to answer in the required detail.


I agree. IMHO it even sounds more like a questionnaire that provides them an overview of the system rather than a really extensive safety analysis.

Even the simplest (non-safety-critical) systems in automotive typically require a lot more documentation. E.g. if you have to prove that your development process is Automotive SPICE compliant, then you need documentation for everything - from detailed requirements to test conformance. For safety-critical stuff it's even more.


What is your estimate of the required detail? Mine is 100,000 pages. Is your estimate, I don't know, only 100 pages or so? Look at their definition of the word "describe."

This letter is designed to kill the company. That is a result I support, but the people saying "This letter is designed to kill the company" are entirely correct.


Designing a self driving car on a hobby budget probably isn't doable ($3M doesn't cut it). 100K pages is probably on the high side, but let's say the order of magnitude is correct (25K, 50K, 100K, it's effectively all the same).

I think he could have gotten away with showing a substantial effort towards answering the questions and getting some kind of experimental license to verify that the technology is viable. I emphatically do not believe that the goal was to 'kill the company', given that all these questions have - for the automotive industry - reasonable answers, and that if Hotz was serious about this he should not have been totally blindsided by the request.

If he was, then he should probably have researched the space a bit better before embarking on the project; it's one thing to be a 'hot hacker', but it's quite another to go into this business without the required knowledge of what being in that business will entail.

Try to imagine SpaceX/Elon Musk backing out of the rocket business because 'the paperwork is just too damn complicated and lawyers are no fun'; ditto for Tesla, Google or any other party that is trying to revamp some branch of industry.

If the tech is for real, the reporting requirements are a reasonable extra cost to be borne by the company; the amount of money available for this tech would dwarf the cost of the reporting.


Why do you assume this at all?

You can respond pretty much with anything that accurately answers the questions. If they want more detail, they can then ask for it.

I don't get where this strange idea comes from that if he doesn't provide 100k pages he's going to be subject to a fine immediately. It's 100% completely and totally wrong.

If he literally answers the questions, he's fine. If they want more detail, they can ask for it.

The letter is not designed to kill the company, and the people who think so have probably never dealt with any regulatory agency.

My source on this, btw, is that I've dealt with many regulatory agencies many times.

They aren't psychos, even when they are adversarial.

To give you an example, when involved in a pretty adversarial issue with the DOJ, I cannot say they were anything but professional.

When the DOJ wanted more info on something, we'd say "hey, can we just sit down for an hour and chat", and they'd usually say "great, let's do it", and then maybe a few weeks later they'd say "hey, thanks for doing that, we have some more questions about x, can we set up another hour", and so on. Are they always like this? No. (Am I the most experienced person in the world? Also no, and it's not what I do anymore at all, but this was my experience with pretty much every agency, every time.)

But the vast majority of the time, they literally are just trying to gather info to decide what to do.

It's only when they are trying to actually get a particular result that things change. But you'll pretty much know when that happens, and this ain't it.

The only agency I've ever seen just be outright hostile is the CPSC.

The NHTSA, in my experience, is pretty much one of the most level-headed and professional agencies you will find.

If you have actual evidence to the contrary, where they have "shut a company down", I'd love to see it.


Yes, if a company receiving this letter isn't quite sure what level of detail the NHTSA needs, they could make a polite call to the explicitly listed "call us if you have questions" person and ask about the expected scope, to make sure they're on the right track before they submit the final documents. Federal regulatory agencies are made of people.


Honda's response to 34 questions was 22 pages long, plus presumably supporting documents.

https://www.nhtsa.gov/staticfiles/communications/pdf/Honda-r...


Good grief, that's nothing. I've seen security review questionnaires for enterprise SaaS contracts that were longer than that. If George Hotz or one of his employees really couldn't produce a response like the one Honda gave, there's no way anyone should trust his product. I'm glad he canceled. I don't want his crap on the road anywhere near me.

If I were one of his investors I'd be pissed right now.


The documentation that the NHTSA is requesting is something that would already exist for a properly designed product. They are not asking for it to be generated de novo (and in fact, it cannot be done in that manner if the company is operating under a proper design process).

This letter is designed to give pause to (e: previously I wrote "kill") a company operating with complete disregard for proper operations in a regulated industry.


> This letter is designed to kill a company operating with complete disregard for proper operations in a regulated industry.

We are in entire agreement, but people cannot believe in the superposition of this and "Oh that's just some simple questions from the regulator."

Edit to add: Quoted bit appears to have changed after my quote. I think the old version remains accurate.


They're asking for documentation that any serious vendor would already have. That we believe this would kill comma.ai is really more a testament to our priors that George doesn't keep anything resembling rigorous test documentation.


And even then it wouldn't have to kill comma.ai; it could simply lead to Hotz selling the company to a party willing to dot the i's and cross the t's but retain the technology and Hotz himself as a partner/minority shareholder/employee.


Sorry, I thought I was in and out before anyone had seen my comment. I added a clear edit mark to fess up.

I originally wrote "kill" to mirror your original comment, but I changed it because it (as I perceive the word) presumes a level of intent that I don't think NHTSA operates with: I don't think that NHTSA has any intent to destroy this company. In fact, if comma.ai reached out to NHTSA, I would expect NHTSA to assist (within reason) in answering the letter.

I do think that these are straightforward questions for a group that has its act together. Part of having one's act together in a regulated environment is maintaining open lines of communication with regulators, so the fact that questions would be asked should come as no surprise. Keeping up to date with the regulatory environment would also be expected, and given the recent autonomous-driving publication, the content and scope of these questions should not have been a surprise either.

If these questions kill the company, I don't see that as being the fault of NHTSA, I see that as being the fault of whoever is in charge of regulatory affairs at comma.ai.


[flagged]


> I'd like to politely clue you in that...

If you really do want to be polite, then it's better to avoid expressions like "clue you in."


And if you had a clue, you'd know he really had no intention of being polite. ;)

Phrases like that exist to indicate that one is emotionally dissatisfied but at a level where it's bubbling underneath the surface rather than something you'd like to display fully. It gives just enough plausible deniability of the fact for it not to become the center of discussion, while still unambiguously letting the person you're communicating with know that you're pissed.


True, but he said he planned to release the product by the end of the year... surely he at least had drafts of this information. The letter even said estimates were OK.


Look, I don't want to belabor the point, but a laundry list of shit to submit isn't as easy as copy-pasting from one document to another unless you don't actually care about answering the question and want to get mired in a back-and-forth of "more information needed" or "answer unsatisfactory", when the only grading criteria (the NHTSA's) are behind closed doors.

A lot of people here are assuming that just turning in "some answers" would satisfy the NHTSA. I see it as a much more involved and grinding process. They have the power, not George, so a lot of the refrains about how he's just chickening out don't resonate with me.


You're missing the point entirely: knowing that somewhere down the line you will have to have answers to questions like these is an integral part of releasing a bunch of software and a device to transform a regular car into a self driving car if you are serious about the project in the first place.

If this list surprises you then you probably shouldn't be in the self driving car business.


No, you are. I'm pointing out that documentation that is satisfactory to a large bureaucratic entity is not as simple as the majority of the chorus here seems to think. I think it's perfectly reasonable to expect a company selling a public product to undergo scrutiny, sure, and I can sense that his operation is - probably wisely - thinking they're not robust enough to satisfy the hurdles.

That doesn't mean his product is shit or dangerous. It just means that the incredible amount of time and effort required to respond - and still without any guarantees that's the end of the inquiry - may be a huge time sink and a distraction from the primary objective of product development.

I've worked in many fields with entities from local, county, state, and federal RFPs/RFIs/SOQs/etc., and even if you're the best in the business and have proof, it's not always easy to patch it together in the desired format, on the desired timeline, and call it done.


But that goes with the territory. Just like you're not going to 'disrupt' the aviation industry or the medical world on a shoestring budget (see also: Theranos, you still need a working product even if you do all the paperwork).

And I personally feel that's a good thing. Even if Hotz' software is a-ok I'd still expect him to have it properly documented and vetted before one of his customers hits my vehicle.

The roads are not a playground.

I've done a fair bit of work on vehicles and I'm happy to say that my work passed inspection; I would consider it irresponsible to see such control over workmanship and quality as unnecessary interference by busybodies. The primary objective of product development does not obviate the need for a reasonable overhead to prove that you did your homework.

Also, I never meant to say or imply that his product is shit or dangerous, merely that it is not up to snuff for deployment, let alone sales, at the present stage and that the gap between that and where it should be is too large for Hotz to overcome. In other words: it is not a product - yet.


"I've worked in many fields with entities from local, county, state, and federal RFPs/RFIs/SOQs/etc and even if you're the best in the business and have proof it's not always easy to patch it together in a format desired in a timeline that's desired and call it done."

RFPs and RFIs are not the same thing; they are often thousands of pages of trying to meet hundreds of pages of random requirements.

What you are talking about sounds mostly like dealing with the contracting side of it, which is a very different world.

Have you also dealt with the "Response to regulatory agency concerns" part of it?

Because I have, and while yeah, with say, the SEC, it can be a trip, I wouldn't throw the NHTSA into that category.


Fulfilling regulatory obligations is a known up front critical-path task. It's not like he tried, was met with resistance, and then bowed out. He overpromised and then failed to deliver due to a completely predictable hurdle.


"A lot of people here are assuming just by turning in "some answers" that it would satisfy the NHTSA. "

Satisfy is not the question. Yes, they will request more info if they are not satisfied. But he will not be fined, it will not "shut down the company".


1-4 are simply a matter of providing them with a copy of the instructions. 5, 6, 8 and 9 are incredibly obvious questions that should have been answered during development. 7 is surely something you know just from the business end of things.

I have no idea what FMVSS is, but that there are regulations here should be obvious; no doubt this is something you'd become aware of after a little bit of research. As a consequence you should have an answer to that. This covers 10-12.

13 and 14 he should have known off the top of his head. For 15, it seems you could just say you have nothing to add, or drop in whatever else you have on hand.

Nothing about this strikes me as unusual, unexpected or hard to answer. There is no way this is the reason Hotz gave up, unless the product is a complete failure, he hasn't run any safety tests whatsoever, or he's using this as an excuse to cover up something else.


I think the subtext of the FMVSS stuff is that the comma one is thought to be replacing the rear view mirror on the car, which is apparently illegal in some states and also prohibited by regulation in cars shipping in the US.


I find it hard to believe that anyone of reasonable intelligence, certainly someone intelligent enough to work on self-driving car tech, could miss something so incredibly obvious.

Modifying a car is one thing but directly interfering with parts critical to safety?


Federal Motor Vehicle Safety Standards


Oh, well, that sounds like something you'd know about if you live in the US. That should make it still easier to answer all of this.


It's also the first thing that comes up when you google the term - I'd think an autopilot startup wouldn't have a problem with that one...


They could have asked for George's middle name and the cost of compliance would be burdensome. It is not the rap but the ride that you can't beat.


This list of requests is, for a self-driving car startup, indeed pretty close to simply asking for middle names.


The funnier thing is, if you search Google for NHTSA special orders, you will find this is pretty much the nicest and simplest one they've issued in quite a while :)


> It is not the rap but the ride that you can't beat.

I think that's the reason he's bailing - he doesn't have the lawyer crew, he doesn't want to play the law game - he just wants to create things.


Risk analysis is not a "law game," it is engineering, and a fundamental part of building safety critical products. Software development has historically gotten away with ignoring risk because - outside of specialized domains - the worst of the worst case scenarios were broadly acceptable. But when a developer moves into those domains, the worst case changes from "oh no the website is down" to bodily harm. They should expect to step up their game.

Turning a risk analysis into a deliverable suitable for interfacing with federal regulatory bodies is actually fairly easy. You're just generating a report on engineering work you already did. It's only hard if you want to get away with not mitigating risks (or high levels of residual risk). Because if you document the risk, it serves as proof the engineering team knew about the risk when the product was brought to market.


You're conflating law and risk. One turns out to be a reified political thing reflecting the powers that be of times now and past; the other is an engineering concern. I'm quite personally familiar with the specialized domains of which you speak. I would say that our personal ethic of engineering quality totally dominated any legal questions. I'd guess we could have smokescreened any legal paperwork if we really wanted to (cough, VW emissions scandal, cough). But we didn't! We cared, as a company.

Re comma.ai: having to play the legal game is a substantial existential risk, only mitigable with very well-paid lawyers and a decent PR crew. That's very different from providing an engineering report. I would guess that if they get rolling in China, they will come back with the funding and the will to hire lawyers to sort out the problems.

If you want to contemplate the difficulty of doing engineering vs. surviving the law, consider Tesla's issues selling its car.

It's not the rap - it's the ride.


Creating things that sit on the shelf unused?


precisely.


These particular questions aren't a law game. They are straightforward outputs of good engineering practice.

There is some gap between just wanting to create things and creating things that impact personal bodily integrity. It is of some concern that in today's software engineering environment this is even a question.


Wow. I guess the first thing I'd do is file a FOIA request to see any Special Orders they may have sent to Apple, Uber, Google, Tesla, GM, Ford, Mercedes Benz, Volkswagen, Audi, Nissan, Toyota, BMW, and Volvo, and for copies of any documentation received from those companies that would be responsive to the demands in their own order or the order to comma.ai . Then I'd also request an extension in the time to respond until such time as the documents responsive to the FOIA request have been produced and read.

It would be so much easier to respond if you could just do exactly the same thing that the larger players did.

Unless, of course, those other companies didn't have to produce any such documentation. In that case, clearly, every last company working on autonomous vehicle functions would have to receive a substantially similar order, post haste. We wouldn't want anyone crying about selective enforcement, after all.


Something tells me that serious companies involved in this space never received any Special Orders because they reached out to and worked with the NHTSA from the start, rather than trying to fly under the radar and then being shocked! Shocked! when the government actually noticed them.


This seems to make sense, given how over the past few years I'm starting to realize how things actually work in the world...


Why do that when you could just get the information directly from the NHTSA [1]? I expect all those companies have been in communication with regulators from the beginning and produced plenty of documentation. Besides, they can't do the same thing the large players did since they aren't in control of developing the entire vehicle.

[1] http://www.nhtsa.gov/nhtsa/av/av-policy.html


I read a bit of that, and it looks like the document you linked does not support the threat to fine $21k/day for not responding to the Special Order.

There is quite a lot of "voluntary" in there, along with "future regulation". I would guess that a lot of the voluntary safety assessment letters they get essentially say, "this is not entirely safe in absolute terms, but is definitely safer than an inattentive human driver."


How is that going to help him? You think the odds are that a company like Nissan has less documentation than he has?

Also: secret commercial information is exempt from FOIA.


1. Look at what they did to satisfy the regulators. 2. Respond in as similar a manner as possible.

The commercial info is not necessary. They need to learn and follow the forms and protocols, and at least pretend to be respectful when kissing the ring. If geohot ran a restaurant, I wouldn't want him to be the one talking to the health inspector, either. He has the wrong personality for it. You really want the schmooziest suck-up in the company to do that stuff.

I fully expect that all the companies with compliance departments have reams and bales of responsive documentation. As comma.ai appears to be clueless with respect to regulators, the more sample material that they can look at to crib the correct answers, the less likely it is that regulators would summarily drop the axe rather than try to work with them in a reasonable manner.


"Kissing the ring"? They want his test documents, not his fealty.


They want his compliance, or his $21k/day. The document is all stick and no carrot. And it is a big, federal regulatory agency stick. I think "kissing the ring" applies.


You also know they often have very common procedures for these types of letters, right? They're usually brought on by court cases, etc. For example:

http://www.fda.gov/ICECI/ComplianceManuals/RegulatoryProcedu...

They also generally warn of fines ahead of time, because otherwise they get people arguing due process.

Seriously, this is not that uncommon. Note the fine warnings in all of these special orders:

http://www.documentcloud.org/documents/1349845-n-h-t-s-a-spe...

https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&c...

http://www.safercar.gov/staticfiles/safercar/recalls/Special...

http://www.autosafety.org/sites/default/files/imce_staff_upl...

etc


> It would be so much easier to respond if you could just do exactly the same thing that the larger players did.

The FOIA route is appealing because it feeds into a sense of perceived indignation.

But the most important thing to copy from the large players is never going to be spelled out in their submissions.

I'll let everyone in this thread in on the secret: hire someone that specializes in quality management systems.


Today, I can only paraphrase the harsh, bullying, destructive feedback George gave us on our startup:

George, "that wouldn't work" :)


And days before that I was with him at a fundraising dinner and we were like besties. Sociopaths.

(Edited to reflect the point more politely)


That's a very rude thing to say publicly.


>And days before that I was with him in a fundraising dinner and we were like besties. Sociopaths.

Are you suggesting people should be rude to you when your product is shit?


How is that bullying, how is that destructive?


I'm seeing a huge number of assumptions and assertions, but this is still "developing news". I think it's equally likely that this device will be brought to market at a later point when the start-up gathers the resources, test results, and the lawyers that can handle the regulators.


He did release the dataset and some Python scripts for training the model.

https://github.com/commaai/research

The last blog post talks about them having a system for the 2016 Civic. I guess maybe they will sell it on AliExpress?
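
If anyone wants to poke at the release: as I remember the repo's README, the recordings are HDF5 files, roughly one 'camera' file of video frames paired with one 'log' file of sensor/CAN channels per drive. The file names below are placeholders and the dataset names vary, so a minimal exploration sketch just lists what's inside:

    # Minimal sketch for inspecting one of the released HDF5 recordings with h5py.
    # File names are placeholders; dataset names vary, so just list what's inside.
    import h5py

    def summarize(path):
        with h5py.File(path, "r") as f:
            # Print every dataset's name, shape and dtype.
            f.visititems(lambda name, obj:
                         print(name, getattr(obj, "shape", ""), getattr(obj, "dtype", "")))

    summarize("camera/some_drive.h5")   # expect a large array of video frames
    summarize("log/some_drive.h5")      # expect per-sample channels: steering angle, speed, ...

Most of the work in the training scripts, if memory serves, is aligning the high-rate log channels with the lower-rate video frames before anything reaches the model.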


Here's Bloomberg article on Hotz's initial prototype: http://www.bloomberg.com/features/2015-george-hotz-self-driv...


Technically it sounds more like he is pivoting to other markets and tweaking the product. I don't think he is giving the $3 million back to Andreessen.


For a community that preaches iteration so consistently, it's rather hypocritical and hilarious to see so many responses basically assuming that his product is vaporware and that he's moving operations to China because he can't hack it here in the US. Occam's razor would indicate that fewer regulations in China make it a path of least resistance for iteration of this sort.

Besides, if Hotz can get his car to navigate China before Musk can get his product to not drive in the wrong lane in a fucking parking lot, then we've got game on.


Comma's main advantage over Tesla's Autopilot was, as I understand it, the use of cameras to collect video data. Hotz presented that as the advantage at TechCrunch Disrupt, at least, saying that he could beat Tesla's Autopilot because he had full video data from a small smartphone app that he had test users running as they drove, while Tesla's vehicles had no video input.

I wonder if his choice to cancel Comma is related to Tesla's recent announcement that all new Tesla vehicles would ship with a full video camera outfit.


> while Tesla's vehicles had no video input.

This is factually incorrect.

Tesla vehicles rely on Mobileye [1] cameras to do their environmental analysis. These cameras come with the processing hardware to analyze the sensor data they're getting, and spit out environmental information that the driver assistance system [2] can use to navigate the car.

Whether Tesla had access to the sensor data directly rather than just the analyzed stream is not public information. Now that Mobileye broke up with Tesla [3], it's unlikely they'll ever have it.

Mobileye has years of experience dealing with the kind of visual data you need to handle on the road, such as crazy HDR (you want to be able to extract useful information when the sun is glaring at you as well as in the dark woods at night, and be able to switch immediately, as when you're going through a tunnel). They've developed dedicated hardware to reliably and quickly do object distance and velocity detection: this is exactly the scenario that a "Real-Time OS" is made for. You cannot afford to be pre-empted, lose 5 ms on your analysis, and miss an action frame.

Thinking you can pull this off reliably with a non-RTOS and consumer cameras is... naive.

[1] https://en.wikipedia.org/wiki/Mobileye

[2] Mobileye's equipment was never rated for full-on "Autopilot", which is why they were very unhappy with Tesla marketing their systems as such

[3] http://www.wsj.com/articles/mobileye-ends-partnership-with-t...
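
To put the 5 ms figure in perspective, a quick back-of-the-envelope calculation (my numbers, not Mobileye's):

    # How far a car travels during small processing delays, at highway speed.
    MPH_TO_MPS = 0.44704
    speed = 65 * MPH_TO_MPS                  # ~29 m/s

    for delay_ms in (5, 33, 100):            # scheduling hiccup, one 30 fps frame, a stall
        meters = speed * delay_ms / 1000
        print(f"{delay_ms:>4} ms at 65 mph -> {meters:.2f} m traveled")
    # ->    5 ms at 65 mph -> 0.15 m traveled
    # ->   33 ms at 65 mph -> 0.96 m traveled
    # ->  100 ms at 65 mph -> 2.91 m traveled

Losing a frame now and then isn't automatically fatal, but it's exactly the kind of tail behaviour a hard real-time design is meant to bound.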


I should add, by the way, that while Teslas have onboard radar, it is not a primary sensor, but a secondary discriminating sensor.

This was admitted by Musk on Twitter following the fatal crash in Florida: https://twitter.com/elonmusk/status/748625979271045121


It was a secondary discriminating sensor, but the latest Autopilot 8 upgrade retasks radar to be "a primary control sensor without requiring the camera to confirm visual image recognition": https://www.tesla.com/blog/upgrading-autopilot-seeing-world-...


Indeed. And I absolutely do not trust them on it. Their radar hardware just isn't good enough to reliably do that.


Ditto. It also seems strange to be able to retask their radar from secondary to primary control.


Personally, I think it's good to see this. I work on a small part of an ADAS system being developed for a large Tier 1 supplier. They won't even look at you if you haven't adhered to ISO 26262 development standards which add - easily - 2 times the work that a normal development would take. And for good reason.

I'd like to see GH's Failure Modes and Effects Analysis, Fault Tree Analysis, Failures in Time Analysis, etc., etc., etc.

This stuff has to be really safe, and it's not there yet for a REASON.
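
For anyone who hasn't seen one of those documents: an FMEA is, at its core, a disciplined table in which each row names a failure mode, its effect, and severity/occurrence/detection scores whose product (the risk priority number) drives which mitigations happen first. Below is a toy illustration of that bookkeeping, not an ISO 26262 work product (26262 additionally involves ASIL classification and much more):

    # Toy illustration of the bookkeeping behind an FMEA row -- example values are invented.
    from dataclasses import dataclass

    @dataclass
    class FmeaRow:
        failure_mode: str
        effect: str
        severity: int     # 1-10, 10 = worst outcome
        occurrence: int   # 1-10, 10 = most likely to occur
        detection: int    # 1-10, 10 = hardest to detect before it causes harm

        @property
        def rpn(self) -> int:
            # Risk Priority Number: the classic S x O x D ranking.
            return self.severity * self.occurrence * self.detection

    rows = [
        FmeaRow("camera blinded by low sun", "lane detection lost", 9, 4, 5),
        FmeaRow("CAN command dropped", "steering correction not applied", 8, 3, 6),
    ]
    for r in sorted(rows, key=lambda r: r.rpn, reverse=True):
        print(f"RPN {r.rpn:4d}  {r.failure_mode} -> {r.effect}")

Filling in and defending hundreds of rows like that, with evidence, is where the "2 times the work" goes.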


When is the open source project to create self-driving systems going to start? I feel very uncomfortable with private corporate AI driving me around and making decisions with my family's lives in the balance. For me, I need to be able to see the code, change the code, change the weightings and priorities. I've been thinking about FOSS solutions for car computers a lot, and it seems like it is really the only way to go if you care about freedom, security and privacy.


Probably about ten years after it's commercially available.


The government threatening him with a penalty of $21,000 per day (!) before he'd even sold the product sure didn't inspire affection on his side.

[0] https://www.scribd.com/document/329218929/2016-10-27-Special...


They're just asking for information at this point. The penalty is if he doesn't provide that information by the deadline. I'm sure being threatened with such a penalty isn't pleasant, but it's hardly unusual. I mean, just about every little government form I send in has a section that starts out with, "Under penalties of perjury, I declare that...."


There's gotta be something else; no way one letter makes you scrap a product like this. I fail to believe it.


I've been thinking for a while now that the main threat to the existence of self-driving cars in the medium-term future is that somebody, in the rush to beat Google, will jump the gun and sell cars that kill people, causing Congress to knee-jerk outlaw the whole thing.


Do third-party "autopilot" systems require ISO 26262 certification? I know that car manufacturers and tier-1s spend lots of time and money getting their safety-critical code certified. Do the same requirements not apply to aftermarket solutions?

Actually, now that I think about this more... does Tesla's autopilot have ISO 26262 certification? I hope that the recent advancements in self-driving cars isn't a result of tech companies bending or breaking the rules, but I don't know enough about when ISO 26262 is required (if at all).


I hope he backs down a bit, consults widely and works with the regulators looking into his company's product, even while exploring other markets.

Still personally rooting for him because I personally think his product is amazing, and is poised to be a huge success if he does ship.

As an outsider, his audacity, backed by proven technical smarts, seems to be the quality that should define the Silicon Valley startup scene, but over time, sentiments here at HN and other forums seem to suggest otherwise.


The following quote from his site speaks volumes about the hubris:

"we didn't do anything wrong, but somehow, we lost -- nokia, or car companies in 5 years"


> George Hotz cancels his Tesla Autopilot-like ‘comma one’ after request from NHTSA

What is the role of the National Highway Traffic Safety Administration wrt driverless cars and trucks?

Can they issue rules? Can they prevent deployment? Is it for cars and trucks? Once driverless exists, will there have to be a differentiator between auto and truck?


> Can they issue rules? Can they prevent deployment? Is it for cars and trucks?

Yes. They are a federal regulatory agency.


They can issue rules and regulations, levy fines, and prevent deployment on public roads. If you are operating only on private roads (say, a farm or a mining site), then you are not under the NHTSA's purview and can more or less do what you want.


Hotz started an AI company and the autopilot system was a relevant application but not the core product or mission.


"The difference is shipability."


Not everything we do is a success, and there are many reasons for our failures when they happen. The trick is to lick your wounds, learn from your mistakes and get back on the horse as soon as you're able to.

I hope George learns all the lessons he needs from this to make whatever happens next a success.


Why does the document from the NHTSA look like a court order... formatted like something you would receive when getting sued, etc.? Is this the standard format for requesting that questions be answered?

It looks more intimidating and scary than a friendly "hey, we want info about your product".


That's to make sure you understand that if you ignore the document there will be consequences.


Reading the letter seemed somewhat tame until the end - fined $21k/day for noncompliance. Ouch.


It's not really out of the ordinary. The fines provide an actual known incentive to actually respond and not brush the letter off.


In the discussion on Jalopnik (a fairly large auto-enthusiast site), someone mentioned that Alex Roy had a good experience riding in a comma one-equipped car (https://twitter.com/AlexRoy144/status/791996855114694657).

I thought this was an interesting perspective - Alex Roy is famous in car enthusiast circles for his driving records over the past 30 years. He's set speed records with drives across the US and around Manhattan, and has set electric/semi-autonomous records in a Tesla Model S in August 2016.

The point being that Alex Roy has probably spent more time thinking about driving, planning trips, and understanding traffic rules than most other people alive today. His perspective is just a single perspective, but given his massive experience with car-driver systems I think it's an interesting perspective.


The commenters on that Twitter thread are critical of geohot for "rolling over easily".

I don't think we should presume that the guy who defied Sony, AT&T, Apple, and the DMCA is doing any such thing.

Maybe he discovered a flaw in the design of comma AI or something.


Then maybe he should have said so. Criticizing him based on his own description of why it's canceled is entirely reasonable. He got a letter from the NHTSA and immediately said he was done because he didn't want to deal with it. He's either rolling over for them or he's lying about his motivations, either way it's not good.


Well, it is not really cancelled. He's moving to China.


He said "The comma one is cancelled." I interpret the followup as being that they're doing other products and doing those other products in other markets. Maybe he just meant they'll do it in China instead, but I can only go off what he says here.


It's weird that he would give up on his product after one letter, specially when money is not an issue. Perhaps he already has a buyer for the tech and just wants to move on to something else.


Funny that Tesla doesn't even sell "Autopilot" anymore, as it was an off-the-shelf 3rd-party solution, but the reference persists. Talk about the power of marketing and branding.


Was this just a face-saving way to end the project, or did he really shut down a viable project that he wanted to continue over a simple request for information by a regulator?


Rule number 1. Don't piss off investors. This won't bode well for Comma or the investors. Really bad PR move.


Best way to get some more press and attention? Hype a project and then claim to cancel it.


May I remind everyone we don't know the full details behind what's going on!


Doubt he actually cancels it; he'll change his mind next week. This was more spur of the moment, and his investors will calm him down? Way too much attention is being brought to it.


It appears Hotz will now deploy his system in China, where the regulations on such devices are surely more lax. Looks like a good move for extended testing without pesky government scrutiny.


If it doesn't work, the US might be a better place to mess up than China.


I was thinking the same thing...sure there are fewer regulations, but "death penalty" is not outside the realm of possibility for fucking this up in China.


Must be a great morale boost for the employees.


Thank goodness. It sounds like his grand scheme crumbled under a little scrutiny from a consumer safety organization. We don't need another Theranos.


Maybe it's a piece of crap and not worth the lawyers


This is the most likely explanation to me


That reminds me of my old comment about this

https://news.ycombinator.com/item?id=12492856

I'm really not overly negative about new things but this was predictable.


YC should take note, as a bunch of recent investments outside of pure software will likely end up in the same problem space.

Nuclear energy, supersonic planes, etc. - applying the "disruption" and "agile" methods does not work everywhere, to put it mildly.


Good. It seems exceedingly unlikely that this is a road-ready technology, and we don't need yet another software-brained megalomaniac releasing an insufficiently-developed, unproven product into the physical world, where "bugs" will actually just straight kill you.


Egohotz is back!


Classic government. Self-driving technology should be benchmarked against how good people are at driving (and how often they kill and injure people), as opposed to being proven intrinsically safe like a car part.


The letter they received from the NHTSA is available here:

https://electrek.co/2016/10/28/george-hotz-cancels-his-tesla...

It looks to me to be fairly mild and entirely reasonable. To immediately respond to this letter with "fuck it, we're not going to make this product anymore" is bizarre, and not the government's fault.


You call an out-of-the-blue threat of $21,000 per day (!) 'fairly mild'?


He was probably expecting a letter from the start and knew that time is limited. geohot may not care about laws and ethics, but there is no way he doesn't know about all the possible consequences he may face in a project like comma.ai. He works on something until either the pressure is no longer justifiable or he succeeds (like unlocking the iPhone). Working in this manner gives him a great advantage.

I also think it's likely that he never actually intended it to become a real product and it's all about the journey and experience.


When it's just a request for information and the penalty is if you ignore it? Yes. All they have to do to avoid the penalty is provide the information requested. Just about every government form threatens you with jail time if you lie, for example, and we put up with that.


The reason for the disparity here is that, while you're right that humans are not particularly good drivers, there is a massive institutional infrastructure built up around things like criminal prosecution and insurance for when a human messes up. We know[0] how to deal with (accidental or deliberate) 'bad' people. We haven't quite figured out how to do that with intelligent systems that blur the line between automatic and intentional.

---

[0] Of course, there is much valid debate as to whether we really -know- how to do this either. But legacy and history carry weight, unfortunately, either way.


They didn't ask him to prove it is safer than human drivers. They essentially asked him for documentation showing it does not actively make the car less safe.


I mean, they apparently have a clause trying to entirely deny liability. That's not really a reasonable spot to be in.


Can you show me if Tesla's is fundamentally different? I'm sincerely curious.


AFAIK they don't have people signing waivers, so I imagine they must have gone through a lot of regulatory hurdles? They definitely talk about people needing to be attentive in various ways, but I don't personally know the extent of it. Searching for "Tesla autopilot liability / waivers" didn't turn up anything relevant.


Well there are two options a politician has here:

1) Figure out how to explain a nuanced argument to their constituents.

2) Block progress until their explanation problem goes away (in this case, by raising the benchmark).


Sure. In the US there are ~8 fatalities per 1 billion vehicle miles. Let's see some documentation from him that shows he can beat that rate.
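
And for a sense of scale on what "beating that rate" means statistically: with zero fatalities observed, the rule of three only lets you claim the true rate is below about 3/N at ~95% confidence after N miles, so the miles pile up fast. A rough sketch using the ~8 per billion figure quoted above:

    # Rough sketch: fleet miles needed, with zero fatalities observed, to claim at
    # ~95% confidence that your rate beats the quoted human rate ("rule of three").
    human_rate = 8e-9                  # ~8 fatalities per billion vehicle miles (figure above)
    miles_needed = 3 / human_rate
    print(f"{miles_needed:,.0f} miles")                  # -> 375,000,000 miles

    # Illustrative fleet: 10,000 test vehicles averaging 30 miles/day.
    days = miles_needed / (10_000 * 30)
    print(f"about {days:,.0f} days of fleet driving")    # -> about 1,250 days

And that only shows you're not demonstrably worse; showing you're clearly better than human takes even more data.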



