DMV approves Cruise and Waymo for commercial service in parts of Bay Area (ca.gov)
421 points by ra7 on Sept 30, 2021 | 382 comments



Cruise founder here. This is kind of confusing. Short version:

- Cruise permit is for robo-taxi service, available to public (fully driverless, nobody in the car)

- Waymo permit is for robo-taxi service, available to public (human safety driver behind the wheel at all times)

- Nuro permit is for robo-delivery, available to public (no human passengers)


Is this because you specifically asked for a driverless permit? From the post,

> Cruise has had state authority to test autonomous vehicles on public roads with a safety driver since 2015 and authority to test autonomous vehicles without a driver since October 2020.

> Waymo has had state authority to test autonomous vehicles on public roads with a safety driver since 2014 and received a driverless testing permit in October 2018.


Yes. It's a very different permitting process. As you might expect, the bar is very high to jump from driverless testing to operating a driverless service available to the general public.

More info at the DMV's website (although it's a lot to digest): https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...


What happens when the car runs into a problem it cannot address (damage to the car, an obstacle blocking the road)? Does the passenger call you, or are you monitoring the car remotely for problems?


Feel free to repost this, but here’s what happens in AZ

https://youtube.com/watch?v=zdKCQKBvH-A&feature=share


Presumably all cars are persistently monitored 24/7 anyway.


Are there any extra conditions for the company or for riders? Like pre-vetting, or special responsibilities.


Congrats! Curious why Cruise got a permit to operate only between 10 PM and 6 AM? Is it a DMV decision because you applied for driverless operations or did Cruise specify operating conditions?


10PM to 6AM sounds like a great time for testing a new technology.


I'm not looking forward to the self-driving AI vs drunk driver battle.


I, on the other hand, am looking forward to the drunk person taking the robo-taxi instead of driving their own car.


Can I ask why? I know you didn't say otherwise, but I think that would be a good thing.


There's actually great synergy (sorry) between self-driving AI and would-be drunk drivers. The only downside may be the need for additional investment in cleaning services.


I was wondering if it could be my job actually to go to a bar, get intoxicated, and take a Cruise home.


Definitely. But just wanted to know if this restriction is self-imposed by Cruise to start cautious or if it’s imposed by the DMV.


Easier to dispose of the bodies maybe


I wonder if we'll ever get to the point where robo drivers will report suspiciously shaped carryon items (e.g. "CA-SF-69420 reporting in. passenger spotted carrying human shaped object wrapped in a tarp. bleep bloop.")


This is not reddit or 9gag bro/sis/them/their/nb.


Super awesome seeing you on HN! As a previous Cruise employee, genuinely happy for you and the team on this great milestone!


'No human passengers' -> except for those in all the other vehicles on the road; for Cruise it is 'and the passengers of the vehicle'.

On a scale of 1 to 10, how confident are you that your service won't kill someone?


Hope we apply the same metric to human drivers as well and don't allow anyone on the roads till they prove they are attentive 100% of the time and can maneuver like F1 drivers at all times. Monitored cameras every 100 feet would be a good start to find who's driving badly, and also a required driver face camera that records to a blackbox like device that cops can get the video from, and which the driver has no control over.

>Fatalities from Crashes

>In 2016, over 33,000 traffic crashes resulting in fatalities, major injuries or minor injuries were reported on Bay Area roadways.

10% of the crashes resulted in more than minor injuries, so 3.3K serious crashes.


Yes to the being attentive, no they don't need to maneuver like F1 drivers because they are not in a race.

Drivers in the USA are - sorry - absolutely terrible, the driving license requirements are ridiculous.


Your profile says you’re in the Netherlands, which is a completely different place.

You may know this already but here in the US, even though driving is technically a “privilege”, in reality it is a requirement in most locales due to lack of consistent and reliable alternatives. That’s part of why getting a drivers license is fairly easy.

It’s also why your “I’ll have my son shred my license after I turn 70” wouldn’t work unless you were living in a place like NYC or Chicago, or were totally okay with having little social life (dependent on your kids’ and friends’ ability and willingness to drive you to places).


Collectively you decide what kind of country you want to create and individually you have the choice to move. FWIW I spent years on the Canadian/USA border and have driven there quite a bit.


These autonomous cars do maneuver like F1 drivers so I guess they are a big improvement already


Only the same way a Boeing 777 is like a fighter jet.

F1 cars can exceed 7 g cornering, where street-legal cars have trouble hitting a third of that. They do 0-120 mph faster than most cars can hit 0-60.


On a scale of 1 to 10, how confident are you that you'll never kill someone while you're driving?


Pretty good. With well in excess of a million km driven, a valid driving license (something which no software product so far has been able to achieve), zero alcohol, zero drugs and no smartphone to distract me while driving. I see driving as a 'full time occupation'. My vehicle(s) (and that includes my bikes) are kept in tip-top condition, no expense spared; I'm fully aware that it isn't just my life that's on the line. I also have a standing order to my eldest that he is to shred my driving license when I turn 70 no matter how much I protest, and that's only 14 years away.

I'm all for self driving cars and I believe they are the future. I also believe they will be a decade or more in the coming, and that we need at least one more level-up before AI software/hardware combos are solid enough that your typical self-driving solution will outperform an experienced driver.


Ah, then a fair question to ask might be how confident Kyle is that there won’t be an at-fault crash in the first million km driven.

(Or even better, what he predicts will be the distance driven per serious at-fault crash.)


Fair enough.


70 isn't too old to drive.


Neither is 71, 72, 73, 74 and so on, and then one day you really were too old to drive. In a country where driving is a necessity there is no way around this so you get a lot of people driving who really shouldn't be. To play it safe I've set the limit at 70 for myself because I think that if I don't put a hard line down I will always think 'one more year'. This avoids the gray area entirely, exactly because 70 isn't too old to drive.


I know a 62 year old that shouldn't drive. And I know a 76 year old that is fine. It really depends upon the person, but our DMV isn't fit to detect this.


I know people in their 20s/30s who probably shouldn’t drive…


He's trying to be one of the pioneers of robot cars ... does he care how many people his robot cars are going to kill (from their own passengers to the innocent drivers alongside them) in the name of progress? Putting them on the road and learning and improving the AI is the progress needed to perfect them over years, but it's going to be deadly ... unlike creating and fixing bugs on a video game live streaming site.

Maybe he does care, but how much?

This is not to be flippant (I appreciated and used Justin.TV a ton back in the day ... canceled cable TV because everything was there for one to watch ... 24/7 marathons of your favorite shows), but this is what I view as the harsh reality of putting these things on the road, as Uber's self-driving car already killed a pedestrian. Personally, I'm not sure I could in good conscience work for a robot car company (I was just being recruited by such a company).


Though no one is behind the wheel, do you still have people operating the car remotely when there are issues?


Why the difference Cruise vs Waymo?


Two questions: 1. How do we see this detail about the permits, such as, for example, that Cruise is permitted to have "nobody in the car" while Waymo is required to have a "human safety driver behind the wheel at all times"?

2. Where are Cruise's testimonials etc. for the presumably successful San Francisco non-commercial testing of this "fully driverless, nobody in the car" system you now have commercial deployment for?


Thanks for the clarification! For the Cruise rides will they be booked through services like Lyft or the Cruise Anywhere app?


Is there a timeline for when you will be rolling out the availability to ride in one of your vehicles to the general public?


I’m rooting for you!


will you not have a robo-taxi service with human safety driver at first?


We have been running one for employees on and off since 2017.


So is that a yes or no to parent's question?


How could you interpret it as anything other than a 'yes'?


Actually, answering “yes” to that question would have been extremely confusing given how the question is worded! Which makes the snarky request for a binary response even less reasonable.


I take your point but it might not have been snarky - for example, HN has a lot of non-native English speakers and different languages handle affirmation-of-negation very differently.

https://en.wikipedia.org/wiki/Yes!_We_Have_No_Bananas


Running a program "for employees" is pretty different than running it "for everyone"


Serious question: how much death are you budgeting for over the next 18 months?


wow. just wow. in the words of Archer, "phrasing!!"

there's a way to ask a serious question to get a serious answer, and then there's your way.


Yes, the direct way. I'm sure it's in the budget. Would love to know the figure.


While I can appreciate that, let me rephrase my post:

There's a way of asking a question without being a total dick, and then there's your way.

Being direct doesn't mean being an asshat. I have plenty of experience of doing it the wrong way, and yes, I'm suggesting yours is not the right way.

Edit: "I'm sure it's in the budget."

Actually, I'm guessing this would be covered by the liability insurance. So it's probably a much smaller line item in the budget than you are so dramatically implying.


It's a useless question anyways, because the company gains nothing by "budgeting" for death, and loses everything by settling on a real number. It's like asking Boeing how many families they expect to send payouts to in any given quarter. The only answer is zero, and any answer above zero is itself negligent.


It's not exactly useless, as it is a serious business concern. However, asking it the way it has been phrased is useless, as you've stated: announcing from the start that you are planning on paying out fees like this is just dumb. It is good business planning to be aware that there is a >0% chance of unintended bad things happening, so you take out an insurance policy to cover that contingency. There's a difference between planning for the fact that an accident is possible vs planning to cause an accident.


Exceptional events like this are usually covered by insurance.

However, companies do provision for payouts in potential cases, whether private/class-action or government regulatory ones; they are legally required to do so and to disclose it in their filings.


Insurance means the maths and plans are worked out by someone.


That is really for the insurance companies covering them.

The actuaries at the insurance company do that kind of analysis for all products, from life/medical to workplace/product safety, which this would probably fall under.

The courts also have fairly well known tables/methods for calculating cost/value of life used to compute compensation for injury/death.


You know, back in CS class we had to go over the Therac-25 problem a lot. It wasn't so much about the specifics of the incorrect code, but more about the hubris that it "couldn't have been the code".

It seems like with self-driving, it's like 45 companies just started making clones of the Therac-25 where they "are pretty sure it's fixed now, this is easy". It's the same damn ego.


Thanks for the clarification, but is there any public resource I can point people to in order to confirm the difference between Cruise and Waymo permits?

I believe you, but skeptics won't be persuaded by "the founder said so on HN".


It's literally the post you're commenting on


> Cruise founder here. This is kind of confusing. Short version:

> - Cruise permit is for robo-taxi service, available to public (fully driverless, nobody in the car)

We need to have fewer executives talking about their awesome safe technology and more executives being forced to use their awesome safe technology.

It is extremely unfortunate that the DMV did not force dogfooding as a condition of the permit: requiring that every executive of a company applying for a driverless robo-taxi permit, and the executives' families, including their children, give up their drivers licenses and all other modes of transportation, ensuring that those executives and their families are the constant test subjects of the technology.

Same goes for the likes of Musk.


Between the noise of self-driving singularists, and that of self-driving impossibilists, it's great to see plodding, sequential progress like this.


Yes. I've been saying that for a few years, as Waymo's disconnect rate improved by a factor of 2 every 18 months or so. This is a hard engineering problem. Now that the "fake it til you make it" clowns have dropped out, there's real progress.
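As a rough illustration of what that cadence implies (the factor-of-2 every 18 months is my own estimate, and the starting figure of 1,000 miles per disengagement below is invented purely for the sketch):

```python
# Hypothetical sketch: if miles driven per disengagement doubles every
# 18 months, the improvement compounds exponentially. The starting value
# of 1,000 miles per disengagement is made up for illustration only.
def miles_per_disengagement(start_miles: float, months: float) -> float:
    """Project miles per disengagement after `months`, doubling every 18."""
    return start_miles * 2 ** (months / 18)

for months in (0, 18, 36, 54):
    # doubles each step: 1000 -> 2000 -> 4000 -> 8000 miles
    print(months, miles_per_disengagement(1000, months))
```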


> Now that the "fake it til you make it" clowns have dropped out

I am pretty sure they recently announced "Full self driving v 2.0 for real this time, seriously, guys" and did not drop out


you're forgetting the "Full self driving v2.0 for real this time, seriously, guys. Oops, v2.1 (bug fixes)"


The one that says "this is beta, don't trust it at all"?


Where can I follow along with the latest data on disconnect rates?


From personal experience seeing how their data is generated, I don't trust Waymo reports at all. They straight up lie about the capability and disengage rates to an astonishing degree. I left several years ago, but I see no reason they would have developed honest practices in the meantime.


I have some personal experience as well, and I didn't pick up on any dishonesty like you describe. There was a consistent emphasis on safety, and multiple levels of thorough testing. When I left, I had generally positive opinions about the project.


They definitely care about safety. I don’t doubt their dedication to preventing accidents. My objections come from their hand waving away disengagements from the official results, shared with both regulators and Alphabet leadership. This was done to make their lead look greater than it was and to hide the slow progress with simple, required driving maneuvers like yield signs and unprotected left turns. My position had me handling driver issues directly and what the reports said did not line up with their, or my own, extensive time in the actual car.


I don't know about the disengage rates, but just the state of the training data scares the bejeebus out of me. There was training data released and posted here on HN some time ago. I'm pretty sure the post was talking about how bad the training data was. There were all sorts of crazy things, like signage being indicated in the trees and other "to a human" obviously wrong data. However, this was the supposed training data for some ML system. It was at this point that I just shook my head and walked away from interest in the subject.


The reported disengage rate has improved a lot, but Waymo only reports disengagements it classifies as "related to safety", based on proprietary analysis and counterfactual simulations it does not share.


This isn't really true. You may be thinking about the CA required disengagement report rules. From a recent waymo safety report:

"In this paper, “disengagement” refers to any event in which the AV’s automated driving mode is disengaged. This is broader than the definition used in certain California state regulations, where the term more narrowly refers to certain safety-relevant disengagements."


What about the accident rate per miles driven? If that is dropping too I'd see that as confirmation that it really is improving. If that isn't then it's just bookkeeping.


You can read all California autonomous vehicle accident reports here.[1]

Most Waymo reports are "stopped in traffic, was rear-ended". There are two reports this year where the Waymo vehicle was responsible. In both cases, it was backing up while doing a three-point turn in a narrow street and bumped into something at 2 MPH. So they need better sensor coverage in back.

[1] https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...


not necessarily, they could be putting the car in more difficult situations as it becomes safer


I should start paying attention to these pesky plots then ;-)

https://en.wikipedia.org/wiki/Gartner_hype_cycle#/media/File...


What's the "disconnect rate"?


I believe it's how often the system requires human intervention... e.g., once per 1,000 miles, or once per 250 hours, or something like that.


Oh, that would make sense. Thanks!


Disengagement rate.


Did you honestly believe you were clarifying in a less confusing manner than the original question with this response?


The parent poster is correct though. "Disengagement rate" is the term used in the industry, and what would get you results in google searches. "Disconnect rate" is not what it's called, so it's helpful to correct that.


I think most reasonable people aren't against self-driving, they don't consider it impossible.

They're generally against "create a self-driving startup in 2021, IPO by 2024 at the latest". That's just snake oil salesmanship.


Also against the "move fast and crash into things, and people" attitude some of them have.


(Tesla)


> The California DMV said in a separate release that Cruise driverless "vehicles are approved to operate on public roads between 10 p.m. and 6 a.m. at a maximum speed limit of 30 miles per hour."

Nighttime-only testing at super low speed. It's a good way to get started, but definitely not the autonomous taxi the title implies.


The speed limit for most residential and commercial streets in SF is 25mph - something like 97% of all street segments in SF have a limit of 30mph or lower.


Yes but it means a Cruise would not be able to jump on 280 and take you to SFSU or whatever.


I doubt a lot of passengers go to SF State at night


yes, no college student has ever needed to get back home at night


A significant portion of them should not legally be driving either, being drunk/high, etc.


Which is pretty much the business plan of a self-driving car service: provide people who 100% should not be driving a way of safely transporting themselves home.


Well, there is also the option to hail a cab or an Uber driven by a human who isn't currently drinking.


That's the easy part of self driving.


No it's not.

Source: actual ADV engineer


As an actual ADV engineer, do you think this is a reasonable launch? What do you think the timeline actually looks like for driverless and why?


I think one interesting property about nighttime driving is that lots of objects and markers are high contrast. Lights of other cars, highway markings, what can be illuminated by headlights/etc

(aside from folks crossing the street in non-reflective dark clothing)

I think the worst time to drive - as a human - is right when the sun is coming up or going down, or under tree cover when you go into and out of shadow.


Some recognition tasks might be easier at night but I expect that's not the reason permission was granted for those hours. It's almost certainly because there's less other traffic (and fewer pedestrians) at night.


Aren't they relying on LIDAR tho? Night/day all (roughly) same.


I’d imagine they use both to improve accuracy.


> I think the worst time to drive - as a human - is right when the sun is coming up or going down, or under tree cover when you go into and out of shadow.

Fatal accidents do indeed spike around these times, although they seem to be worse in the evenings than mornings, and more pronounced in southern states than northern ones.


If it is San Francisco city, then 30 MPH is not bad at all. I do not imagine other cars driving > 40 mph.


30 MPH isn't a super low speed; it's plenty fast for collisions to be dangerous.


For reference, a fall from about 30ft would take about 1.4 seconds and you'd reach a velocity of 30mph.
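For anyone who wants to check that arithmetic, a quick free-fall calculation (ignoring air resistance) reproduces both numbers:

```python
import math

G = 9.81                    # gravitational acceleration, m/s^2
height_m = 30 * 0.3048      # 30 ft converted to meters

impact_speed = math.sqrt(2 * G * height_m)  # m/s, from v^2 = 2gh
fall_time = impact_speed / G                # s, from v = g*t
impact_mph = impact_speed / 0.44704         # convert m/s to mph

print(round(fall_time, 2), round(impact_mph, 1))  # ~1.37 s, ~30.0 mph
```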


This is a disingenuous comparison.

The overwhelming majority of people who fall from 30ft do not attempt to decelerate in the last second or so.

People who text their way into stopped traffic notwithstanding, 30 mph is the kind of collision speed you see on roads where traffic moves at low highway speeds. Edge cases notwithstanding (i.e. someone barreling through a red or whatever), collisions on roads where traffic moves at 30-40 typically happen at speeds 10-20 mph below that, because people generally brake, but too late to prevent a crash.


It changes quite a bit when you consider getting sideswiped while walking along the roadway, my #1 fear as a pedestrian. I looked up some statistics, and this is more common than getting hit while crossing the street. Two more things to figure out though:

- How often an accident while walking along the roadway is a sideswipe, instead of a full-on crash

- The equivalent of being run over by a car doesn't happen when falling 30 feet. I wonder how much this accounts for pedestrians getting hurt.

This is just what I thought of. It is also a threat to cars, but for cars it would be offset more by the velocity of the car that got hit than it would be for a pedestrian. (For example, a car that is going 25 mph and gets rear-ended by a car going 30 mph.)

Also, drivers often hit the brakes before a collision, so it will be slower sometimes.


Do these cars have billboards or indicators of some kind? If not, they really should. In my town I wait for cars to stop before crossing, and make eye contact or get a friendly wave to let me know they see me.

If a driverless car just stops, I don't feel comfortable walking in front of it if there's no indication that it sees me.

How could that be solved?


They should have some sort of hazard-light flash when they are stopped for an obstruction like humans in the roadway. That would also reduce the rear-end situation where they are unreasonably stopped in traffic due to crappy software.


Drive.AI, before they went defunct, had LED billboards not unlike what you described. It was a gimmick that didn't take off in the rest of the industry due to the universal presence of a safety driver.


Are you referring to walking on the sidewalk or on the street?


The street when there isn't a sidewalk, the sidewalk when there is one. There are a lot of places without sidewalks.


Well, there's your problem: places built for cars, not humans. They should fix that.


That's a pretty great way to visualize the energy involved


For car-pedestrian, car-bicycle, and car-motorbike collisions, yes.

For car-car collisions, not really. Yes, getting into a head-on 30-30 MPH collision is pretty bad, but you're more than likely going to walk away from it, especially if you're in the back seat.


"Pitt et al. (1990) examined about 1,000 urban crashes with pedestrians younger than 20 years of age taken from NHTSA's Pedestrian Injury Causation Study (PICS) data. They found that, compared to crashes with vehicle travel speeds of 10 - 19 mph, the risk of serious injury (or death) was 2.1 for speeds of 20 - 29 mph, 7.2 for speeds of 30 - 39 mph, and 30.7 for speeds of 40 mph or more."

https://one.nhtsa.gov/people/injury/research/pub/hs809012.ht...


Really not sure how valid this 30 year old study is anymore!


Presumably from the title it's about a car hitting a pedestrian... I doubt there's been much significant innovation in that kind of safety since the 90s.

Features to prevent/alert to collisions, sure, but I'd assume this is about hitting someone when going X speed; surely that's pretty similar now and then.


In fact it's probably gotten worse with the growth in size of US cars. Larger cars kill pedestrians and cyclists at higher rates.


There were active changes to cars to reduce pedestrian harm. The most visible is probably the ban of pop-up headlamps.

Not taking a position on the study. Just felt like sharing.


There are also changes that increase pedestrian harm, like the arms race in the height of front ends of SUVs[1] and trucks.

Recent trends in vehicle purchases also increase pedestrian harm, as the popularity of sedans has waned in favor of SUVs and trucks. These days 72% of vehicle sales are for SUVs and trucks[2], and the trend is expected to continue into the future.

[1] https://www.codot.gov/safety/traffic-safety-pulse/2019/march...

[2] https://www.nytimes.com/2020/05/21/business/suv-sales-best-s...


Saying "72% of vehicle sales are trucks and SUVs" in the context of pedestrian safety is highly disingenuous when a huge slice of that is crossovers that are mostly sold globally and conform to European requirements for pedestrian safety.

Hitting one's head on the windshield is/was the source of a lot of the pedestrian injury and fatalities and modern safety requirements try and prevent this outcome.

Would you rather get hit by a '21 CRV or a '95 LeBaron? The CRV seems like the obvious choice.

Of course there's a lot more big trucks to hit you on the roads in 2021 but improvements in pretty much every smaller class of vehicle do a lot to balance that out.

Looking at the data I don't see any clear trend that pedestrians are at more risk today than in the past.

https://injuryfacts.nsc.org/motor-vehicle/road-users/pedestr...


I actually bounced off a car with modern safety maybe 10 years ago or so, trashed a good jacket but I wasn't even shaken enough to bother getting looked over at an Urgent Care, I continued walking to lunch. Not a high speed collision of course, it was a quiet inner city street and I looked the wrong way (one way street, I looked where cars should be if it was a two way street, oops), but I suspect a 1970s car would have been markedly worse for a pedestrian.

One key trick other than the pop-up headlamps going away, was a gap between the bodywork and harder internal surfaces. As I understand it that goes something like this:

Think about a large steel panel such as a car bonnet (hood?), obviously it's no comfort blanket, smacking into that isn't a good idea, but it will bend and absorb lots of energy during impact. Now, think about an engine block, that's not going to bend at all. In a desire to give a more stream-lined look, older cars would mount that large panel almost touching the engine block and other large stiff elements, because why not. Well, dead pedestrians is why not. If you add a gap that gap absorbs lots of energy that otherwise is going to cause injuries to a pedestrian. I can't prove it, but I credit that for the difference between walking away with a damaged jacket and spending the rest of my working day in A&E being told I'm not dying so can't cut to the front of the queue, but I'm also not OK and so mustn't leave yet.


A pedestrian will never actually get into the "hard parts" of a car even on an old car. But the "soft" internal bodywork that holds the radiator, headlights and front sheet-metal are more than enough to cause bad injuries.

You're exactly correct that all that empty plastic in the top front of modern cars and crossovers is there to protect pedestrians (hood designs have been revised to facilitate this as well).


Crumple zones (or their British English equivalent if there is one) are a definite change from "classic" designs but my feeling is they were mostly in place by the 90s... but I could be misremembering.


Crumple zone for pedestrians (i.e. those big bulbous plastic front bodyworks that every modern car and crossover has) are new, like mid-00s new.


Braking distances have gotten much better as tires and ABS have improved. A kid darts out from between two cars and people hit them at speeds much lower than they used to.

On the other hand the height and shape of the average car has changed to make them less pedestrian friendly.... but any one of these vehicles also sold in Europe includes a bunch of measures to lessen damage to pedestrians.

So I feel very comfortable saying a study from the 90s is not relevant because too much has changed on both sides of the equation.


I think that is a good point, as materials and pedestrian safety features of cars have changed. Seems unlikely the relationship between speed and severity would change too though.


Reporting would change too. People don't report crashes where there's no damage other than a hood and some grill plastic (such a crash may have broken a hip back in "the day"). This is even more true now that cars with nonzero amounts of pedestrian safety features have made it to the bottom of the market where people don't have the luxury of being able to mindlessly report everything to insurance and the guy who got hit and has only a minor injury looks at the situation and doesn't see a potential payday.


Have been a statistic in all of these buckets, can confirm that hitting things at faster speeds causes more injuries heh


1/6 of fatal accidents are vehicle vs. pedestrian. I don't have great data on motorcycles, but speed and/or alcohol are the most common factors in those accidents, as opposed to other vehicles.


People have been crushed and killed just when their stopped car starts moving because it wasn't parked in gear or with their parking brake engaged.

Speed being "slow" or "fast" here is unrelated.


Of course speed is related to safety. See page 2 for the actual data:

https://www.littlerock.gov/media/2484/the-relation-between-s...


My point, however, was that the parent poster didn't seem to think so.


Anton Yelchin


Exactly who I was thinking of as a well-known example.


Robocars shuttling around actual people in SF at night in 2021 seems like it should surprise a lot of people who have been skeptics. The road to full, global, universal autonomy will probably take decades if we are going to use that as the standard, but the standard of being certain that autonomy will eat the world seems to be getting very close to being met.


It's very easy to just distribute waste using these systems. I'm still not convinced by the single-vehicle on-demand for single-rider model.

There's got to be a way to make public transportation stop being the least attractive option in low to medium density urban settings.


First make public transportation as clean and safe as ride sharing, then you'll get demand. Nobody wants to sit next to people that smell like piss/weed/cigs/BO. If you were a Uber/Lyft driver that kind of thing would get you 1 star very quickly.


That's just classism in a thin veneer of pretension.

Frequency and reliability are what actually drive ridership.


Have enough of your friends get robbed/raped/groped in public transit and spend enough time on transit with urine in the stations and blood stains on the seats, and you will come to understand that safety, security, and cleanliness also drive ridership.


I'm from Europe so not sure if that's a US thing but that's certainly not such a big topic over here. Yes, public transport can't be 100% safe, being in public never is. But rapes in public transport and blood stains on seats are so rare that I don't know a single person who avoids public transport because of it. It's always just convenience, sometimes price.


Yes, I'm also from Europe. Public transport in Europe is much safer than in some urban areas in the U.S., but the same is often true just for walking down the street in a place like Bern or Frankfurt versus New Orleans or Oakland (these are the two cities where I had friends assaulted on public transport).

That's why it's so shocking when these things happen and are accepted by people who actually resist reform because they view enforcing standards of cleanliness and safety as being "anti-poor" and suggest that even complaining about these issues is being classist instead of just advocating for basic human decency and higher quality public transport.


I'm not hearing any suggested reforms, I just hear complaining. Please, give me basic human decency and higher quality public transit. How do you get there?

BART has police, they're shockingly bad, even for police.


my friends are NUMTOTs who grew up in the bay and are unfazed by the poor people on bart. in fact we're amongst them.

I will never come to understand why the cloistered folks of Palo Alto hate poor people so much and will always resent them for it


It's not the poor people you notice, it's the tweakers and homeless people shitting themselves you notice. And the criminals [1]. The poor people look just like any other person on a train so you can't tell they're poor in the first place. Maybe they have crappier shoes and a crappier phone but that's irrelevant because they behave like normal people.

Seriously, have you ever taken a BART train late at night where there was a guy tweaked out of his mind? Or sat in chairs that look like they were shit on 5 times and never properly cleaned? Why do you expect people to deal with that as if it's perfectly okay? Why do you expect people not to take alternative modes of transportation when that's what they have to deal with? This kind of anti-societal behavior is only accepted in the US, and nowhere else in the world.

[1]: https://www.sfgate.com/crime/article/BART-takeover-robbery-5...


Anecdotal evidence: I only rode on the BART once. When we disembarked, my girlfriend got a few feet ahead of me. A strange man turned around and started screaming at her and physically grabbed her shoulders. I heard her scream and ran at him and violently defended her.

My experience with the BART may have been unique, but from what I've read it probably is not.


yes I take bart late at night all the time and I don't care. Doesn't bother me.

and my point is that the ne'erdowells or whatever is correlative of poverty. the conditions present in the greater bay area are represented on bart and I guess if you're a cloistered tech bro and you've never been to oakland that's scary or something


And my friends were sexually assaulted and mugged at gunpoint, yet we do not assume that this is acceptable behavior or should be tolerated because of someone's income level.

Honestly, I don't understand people who try to normalize violence and argue on public forums that it's acceptable that public transit be an open sewer just because it gives them a faux edginess in liking "gritty" public spaces or sense of twisted moral superiority to those who do care about these things.


I really don’t think this is a rich or poor issue. It is possible to have public transportation that can transport rich or poor equally, while maintaining safety and cleanliness.

For example, public transportation that has locked “cabins” for people who want to feel additional security. For cleanliness, constant sanitization of seats and floors. Trash bins available on the train, with workers replacing them as needed during station time.

No one is kicking a poor person off MUNI but if a homeless person pees on the seat, it gets cleaned up immediately. People that feel unsafe can go into the cabin. Unarmed security guards can be placed at every station, not to hassle riders, but to give assistance to people that ask.

This is not some foreign concept — some Asian countries do this quite well.


The reason public transport safety is concerning to the people you are slandering is because they have children.


What… I’ve had so many incredibly bad and sometimes downright scary situations due to being locked in close proximity to people out of their minds on drugs. Those experiences have absolutely decreased my willingness to take public transit. If you want to call that classism, then fine, but that doesn’t make the impact on ridership any less true.


Post-COVID who wants to sit next to anyone


I do, because hopefully we'll wear masks for a bit longer which also prevents the spread of many other diseases.


The economics of self driving cars will ensure a rise in public transit usage.

A few years ago, I was helping my friend consider options for saving money. On the table was whether they needed to keep their car. Most of their commute was via public transit. However, we looked at the number of trips per month for which public transit was not viable (visiting friends on the other side of the metro area, running to stores), multiplied by the cost of using a ride-share service. Comparing this number to the car payment, auto insurance, gas, and maintenance, it was no contest: even a few trips amortized the cost of the car and made it worth it.

Every trip has a cost in dollars, time, and externalities (causing congestion, environmental impact, etc). People will tend towards being efficient with those costs, judged by their relative importance to them personally.

The thing is, owning a car is mostly a fixed dollar cost per month. Once you pass the point where owning one is worth it for a few trips, the cost of additional trips is far less, while the time savings remain high. The only time I consider public transportation is when parking is a problem, as all other things considered driving is preferable.

Once self driving cars reach a point where they are closer to the amortized cost of a trip when owning a car, the tradeoffs start to change. Those who use their cars for few, but necessary, trips will be the first to ditch owning. I suspect many multi-car families will downsize to a single car. Once you get rid of your car, and are using self-driving cars for the trips where they are necessary, the question for every trip becomes "public transit, or self driving?" Different trips will have different ways they play out. However, a lot of people who are currently eating the cost of owning a car, and might as well drive anyway, will suddenly be able to save a few bucks by using public transit instead.

The final magic of all this is that if you're going anywhere more than a few minutes away, there are many other people going from approximately where you are, to approximately where you're going. The big killer in public transit speed is time. Time going to a common pickup point, time waiting for it to arrive, time spent not going on a direct route, time spent stopping for others to get on or off. With a sufficiently well used self driving car network, it could easily join a few rides together at a point nearby, and have self driving cars ready and waiting at the point near your destination. Two quick transfers, and that's it. If the self driving car app says "45 minutes, $20, or 50 minutes for $10", lots of people would save that $10. For destinations such as downtown, airport, sport stadiums, there's even only one transfer. Given car pool lanes and such, transferring to an express bus might actually be faster than driving directly.
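The fixed-versus-marginal cost tradeoff described above can be sketched with a few lines of Python. All of the dollar figures here are made-up assumptions for illustration, not real prices:

```python
# Hypothetical monthly numbers -- illustrative assumptions only, not real prices.
CAR_FIXED_COST = 550.0      # payment + insurance + parking, per month
CAR_TRIP_COST = 4.0         # gas + wear per trip (marginal cost)
RIDESHARE_TRIP_COST = 25.0  # assumed average ride-share fare per trip

def cheaper_option(trips_per_month: int) -> str:
    """Compare the total monthly cost of owning a car vs. ride-sharing."""
    own = CAR_FIXED_COST + CAR_TRIP_COST * trips_per_month
    share = RIDESHARE_TRIP_COST * trips_per_month
    return "own" if own < share else "rideshare"

# With these assumed numbers, a handful of trips favors ride-sharing,
# while frequent trips amortize the fixed cost of ownership.
print(cheaper_option(5))   # few trips per month
print(cheaper_option(40))  # many trips per month
```

With these particular numbers the breakeven is around 27 trips per month; once ownership is paid for anyway, each extra trip costs only the small marginal amount, which is the lock-in effect the comment describes.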


Groceries are probably my number one reason to keep a car. I can ride buses to and from grocery stores, but the inconvenience of not only lugging tons of bags to/from but also dealing with your typical city life on the journey makes just using the car immensely more palatable. I've basically driven once every other week for the last year; for everything other than groceries, I walk or take transit.

However, there are a bunch of potential one-off situations that make owning the car (or at least having it on hand) important, too. For example, what can you do when you need an emergency vet visit? You're probably not going to be calling a rideshare or taking a bus for that. Those what-ifs are also a big reason I still have a car, even though I don't technically need one.


What about traveling to one-off destinations that may be a couple of hours away and not accessible by public transportation, like a trail head for a hike? Such trips just never happen in your scenario?


If someone does this say, every weekend all summer long, it's likely that keeping one car in the family will still be economical.

Once self driving cars remove the inconvenience of the last mile and the difficulty of coordination, mixed modes of transport start to become really convenient. In your scenario, you might take a self driving cab from a suburban house to a small transfer station, transfer to an express bus/shuttle, which then takes you to a car rental. This hypothetical car rental wouldn't make much sense today, but consider a rental in a small mountain town, renting out off-road capable vehicles. From there you could go on your hike. Though this solution would be a little slower and less convenient than driving directly, on the larger scale, for many people, dealing with this a couple of times a year would be better than the costs of owning a vehicle. Plus for many people it doesn't make sense to own a large rugged vehicle for driving in cramped downtown spaces - using the right vehicle for the job becomes an awesome option.

More popular trailheads for which the drive is fully paved won't be the first area serviced by self driving cars, but would be eventually.

The meta-point is this: Many people survive today just fine without owning a car at all, so it's reasonable to assume that self driving cars will expand the circle of people whose use cases are fully covered by them. This will definitely not be everyone as soon as self driving cars come out.

As this circle grows, various niches of transportation will reach a critical mass and be developed. Another example is transportation between major cities. Currently you have inconvenient Greyhound buses, or the rare train in the US. Then once you get to your destination you have to deal with transportation within that city. Today it's just easier to drive a few hours, assuming you already own a car.


This is fun to think about, but all falls apart when you have awkward luggage that you want to travel with.

Pet to the vet?

Musical instrument?

Shopping / incidental purchases?


Why can't that be covered by self-driving cars? There will definitely be some places that are truly unmapped and out of bounds for self-driving cars, but common hiking entrances and exits shouldn't be a problem.


I read a perspective once that posited driverless cars would first be adopted on specific commuter routes as a point-to-point taxi, and it's seeming increasingly plausible there will be such a service available for SFO to select highway-adjacent points around the Bay before long.


If it is SF, I would worry more about people shitting in those cars or injecting themselves with heroin.


until the first pedestrian gets killed...


>10 pedestrians are killed each year by human drivers in SF, and almost exclusively it is the driver's fault (not only by definition, but also based on an impartial assessment of the situations).

People will not care, and I also do not think these automated cars are going to kill any pedestrians anyways.


> People will not care

I don't believe that.

One pregnant woman gets hit, and she and her unborn child die, and there will be so much anger and pressure on the politicians, everything will get outlawed.

Human drivers are individuals... John Doe killed someone, blame John Doe. Autonomous car kills someone, blame all autonomous cars!


It seems more likely that the manufacturer will be blamed. Society already has experience dealing with inhuman harm from devices. If a phone explodes on its own, no one is calling for all phones to be banned.


A small number of Chevy Bolts have had their batteries ignite spontaneously, nobody has been killed, and now there are a bunch of parking garages that have banned Chevy Bolts from parking there.

Maybe not all autonomous cars will be banned when someone is killed, but I wouldn't be shocked if the same thing happened to whatever company that built the car in question that happened to Uber after they killed an innocent bystander.


> Maybe not all autonomous cars will be banned when someone is killed

Well that's exactly my point. The manufacturer and model of the self-driving car will be blamed and possibly banned, but not all self-driving cars will be. The first pedestrian death isn't going to be blamed on all autonomous cars.


If a phone explodes, many businesses outlaw that brand and maybe narrow that to a model. With prior example, do you think the same won’t happen with a self-driving car?


That's my point. The manufacturer and model of the self-driving car will be blamed and possibly banned, but not all self-driving cars will be.


Consensus is a gradient, and representative consensus is multiple gradients.


Techies are a very narrow band in any of these gradients.

Regular people are very wide bands, and there's a reason horror sci-fi movies where robots kill humankind are extremely popular.


You're both right and wrong at the same time. The media gets to decide, as always, what we're collectively outraged about. Or to be more specific, the people who control the media. So if you want to predict if people are going to care when someone gets killed by a robocar, it depends on who stands to make or lose money if they do or don't.


When a human kills someone "that's a bad driver, take away their license" when a machine kills someone "that's a bad machine, ban it"


People will care a disproportionate amount, I think.


If you're setting the over-under at 0.5, I'll take the over...


[flagged]


Take a deep breath, you're going to be OK.


The legacy manufacturers are pretty invested in the self-driving space at this point though, no? Cruise itself is owned by Honda and GM, among others. I believe waymo has similar investors.

I think we'll see a situation more like nicotine vapes. Instead of trying to crush the industry, big cigarette manufacturers just bought them.


Invested to not fall behind. They will snipe at their opponents, and I don't doubt they would do the numbers on torpedoing their own solution if it meant wiping the field clean. Long-term, self-driving cars taking off will mean fewer cars sold to consumers. Between ride-sharing, commercial operations, etc., the end goal will be fewer cars on roads.


Tesla already did it. No one seems to care?


I don't remember an incident where a Tesla killed a pedestrian?

I know Uber did, and now they don't have a self-driving car program anymore. not necessarily causation, but it can't have helped.


> not necessarily causation, but it can't have helped

I do think it was causal: when the official report came out it was clear that their system was completely inappropriate for autonomous driving. https://www.jefftk.com/p/uber-self-driving-crash


Because we don't know if it did it. All we know is that a Tesla was involved with a fatal crash involving a person changing their tire on the Long Island Expressway. [1]

There's been no confirmation whether Autopilot was enabled.

[1] https://apnews.com/article/technology-business-6127ae797c528...


Just in case you're not aware, Tesla has a long history of fatal crashes involving Autopilot[1], with the first case below:

    This entire incident occurred without any actual input or action taken by the driver of the Tesla vehicle, except that the driver had his hands on the steering wheel as measured by Tesla’s Autosteer system. Indeed, the Tesla Model X was equipped with an Event Data Recorder (EDR) which is intended to enable Tesla to collect data and record information from its vehicles and also provides information on various processes of the vehicle’s functioning systems when a crash occurs. The information regarding vehicle speed as extracted from the Tesla Model X provides proof of the foregoing facts[2]
[1] https://www.tesladeaths.com/ [2] https://www.courthousenews.com/wp-content/uploads/2020/04/Te...


I think it will matter whether or not the car was operating in FSD or not. People can chalk up AP failures to idiot drivers misusing a feature, and then blame that person. When it's Tesla claiming to be fully in control, that's a different story. Though I guess they're probably not making that claim yet for 'FSD'.


> Cruise in March submitted the applications for driverless operations, whereas Waymo in January applied for autonomous vehicle deployment with safety drivers behind the wheel.

Wow, driverless actually means no driver here. But how does that work? What if it makes a mistake? Is there some fallback pilot somewhere that can take over and control it remotely?


Waymo has a similar approach in Phoenix. Passengers can't directly intervene but can call tech support and they'll dispatch roadside assistance to override the car and drive it to safety. Will be interesting to see if Cruise copies this same model.


Waymo also has remote driving navigators/coaches who don't directly steer the car, but can tell it where to go navigation-wise.


Didn't they have "chase" cars in the beginning? Maybe I'm thinking of a different company. Cars that would follow around the driverless cars and could respond immediately if something happened.


IIRC they did chase cars when they started doing fully driverless operations, yeah.


Isn't Waymo also starting the same service in parts of San Francisco?


with safety drivers = not the same


Nope, they announced driverless bay area testing several weeks back. A recent press release confirms this. [0]

Waymo is allowed to test at all hours, in rain and light fog, within a limited geographic area.

[0] https://news.ycombinator.com/item?id=28710098


Testing is different than a permitted taxi service.

The permitted taxi service allows them to scale up and charge a profitable fare. In the testing phase, their fares are severely reduced.

Cruise can charge full price without a driver, but Waymo only has the permit for a backup driver at full price.


Anecdata here, but every waymo car I've seen here in SF for the past several weeks (probably 30+ cars) has had a safety driver.


>> can't directly intervene but can call tech support

"Hi, yeah, car's gone berserk, we're approaching a river, all doors are locked, can't get out, need assistance."

"Hi, thank you for calling. Please hold (your breath)."


What would be your plan if a taxi driver went berserk and did that?

What makes you believe the autonomous car is more likely than a human driver to do that?


1. not get in the car in the first place. unless proven out by 10+ years of human guinea pigs not getting killed.

2. life experience, software expertise, common sense.


Lots of people used to feel this way about automated elevators.

https://medium.com/swlh/what-do-self-driving-cars-and-elevat...


Right, it's eerily similar. People say this about automated railways too. And as with the elevator, on your hundredth trip you are not thinking "Oh no, this is automated, it might kill me", because of course it's automated. You're thinking about whether Jim meant to compliment you or it was intended as an insult last night, and did you bolt the back door?


I’ve tried to find the original source of that claim and come up empty. All of the online articles either don’t reference a source or they eventually link to an out of print book.[1]

At this point I’m pretty sure it’s not true. If past public sentiment was so against automatic elevators, there would be at least one newspaper clipping or digitized article.

1. https://news.ycombinator.com/item?id=25585122


Wait, when in the history of taxis were there 10+ years of humans not being killed by taxi drivers?

Certainly not any recent (consecutive) 10-year period.


It's pretty rare though. It's far more likely that the taxi driver will be killed by a passenger.


In a typical year, NYC's human-driven taxis had a fatal- or critical-injury crash rate 3x the overall US average: https://www.ingberprovost.com/how-often-do-taxis-crash/

The US average is about 36,000 auto-caused fatalities per year: https://www.iihs.org/topics/fatality-statistics/detail/pedes...


According to the DMV press release (submitted elsewhere on HN), this is incorrect:

>Waymo is authorized to use a fleet of light-duty autonomous vehicles for commercial services within parts of San Francisco and San Mateo counties. The vehicles are approved to operate on public roads with a speed limit of no more than 65 mph and can also operate in rain and light fog. Waymo has had state authority to test autonomous vehicles on public roads with a safety driver since 2014 and received a driverless testing permit in October 2018.


Full disclosure, I founded a somewhat competing company in the space, comma.ai

Why is this progress? Evaluating progress in self driving cars is extremely complex. Why do people trust that the California government is able to do so?

A milestone is a working product with many user videos on YouTube. This is a press release from the California DMV.


I really respect your skills as a hacker but it is actually quite scary that you would not understand that DMV issuing these permits is actually a strong sign of progress, and that 'user videos on YouTube' would somehow be the yardstick by which you believe we should measure such progress.

Does the same hold for the FAA? The FDA?


It depends what your opinions of those agencies are, and if you believe they have technical competence or not. After the 737 MAX fiasco, I've begun to seriously doubt it.

"you would not understand that DMV issuing these permits is actually a strong sign of progress" explain to me why you believe this. Videos on YouTube convince me something is real and exists. A government press release convinces me that some process was followed, but perhaps one having nothing to do with the solution to the problem.

Self driving is almost entirely a software problem. Testing that software is equally as complex. Do you believe the CA DMV has an army of good software engineers? Data scientists? If not, how did they evaluate it?


I don't think it's necessary to believe that the DMV's evaluation means anything about the quality of self-driving (and I don't think it means much).

The fact that they're permitting it means it will be happening in practice, which will generate lots of data points, which (a) will help the companies improve their software and (b) assuming it's not a disaster, will get people used to seeing self-driving cars that really work, which (from a practical-politics perspective) will make it much easier to get permission from other regional governments, and to expand the permits.


Assuming your argument is that all you need is cameras, at what point does it become entirely a software problem? Once you have cameras that cover a sufficient FoV around the car? Or can pivot so that they can, like human eyes? How do you know that the cameras you have meet that requirement?

Even then, I'm not sure how you can say its always just a software problem when no one has accomplished it yet.


Agreed that the 737 MAX did not help, but this is also a failure of all of the other regulatory agencies the world over that rubber stamped the FAA approval instead of doing their own. It certainly looks as though this served as a wake up call - it should! - and that a recurrence is relatively unlikely in the near future (until everybody becomes complacent again).

Self driving may be a software problem, but that does not necessarily mean that we have the means to make that software a reality today. Given that fact that we know that intelligent beings make plenty of errors and that they are much smarter than your average self driving system, the enormous range of conditions and inputs and the relatively limited amount of computing power that can be thrown at this problem I highly doubt that it is a 'mere matter of software engineering' at this point, taking into account the state of the art.

Agreed that testing it is super complex, in fact, I think that testing is the key: without a fully representative test equivalent to a driving test none of this software should be on the street. That's what we agreed upon: pass the driving test and you are allowed to drive. American driving tests are ridiculously simple, so it should be easy to pass such a test and yet that hasn't even been made a mandatory requirement for certifying a self driving solution. Until then I don't think any of this stuff should be on public roads.

I don't think the CA DMV needs an army of sofware engineers or data scientists, they are first and foremost a regulatory body that can hire external experts to give them guidance, and I suspect that they genuinely believe that self driving systems from these manufacturers are now solid enough to be given wider latitude during this experimental phase. I personally disagree with the decision, but I do think it is progress, that a regulatory body would dare to attach their name to such a decision rather than to play it safe, this is a clear shift in perception. Time will tell whether or not it was the right decision, if it turns out it wasn't there will be a bigger set-back than this was a step forward.


I'm nobody and I have huge respect for both of you. I do think some transparency would be nice in regards to how they made the decision and with how much data.

The shortcomings of Comma.AI and Tesla FSD are very clear due to YouTube. If Cruise has a much safer system, that is great, I would just like to see more data.


The difference between FAA/FDA and DMV is that they have subject experts whose jobs is to test and evaluate the technology.

DMV consists of bureaucrats that seemingly just rubber stamp the permits hoping for the best.

In that context, people actually using the technology, testing and evaluating its performance are like FAA/FDA and not DMV.


Subject experts can be hired, and I am assuming there are close contacts between DMV and the likes of NHTSA.

https://www.nhtsa.gov/technology-innovation/automated-vehicl...


I mean, if you just look at the offices the DMV works out of, they look like jails. Classic white painted brick (wtf, who paints brick except for jails) and no windows.

You can't expect the people working there to not be driven to insanity.


This isn’t technical progress but is regulatory progress.

Technical progress is the current limiting factor so I agree that it is more important to emphasize right now.

On the other hand, I disagree that user videos on YouTube is a sign of progress. 1. There is an obvious selection bias as only a minority of people with access to the software actually upload videos. 2. The videos are qualitative in nature and it is not feasible for a single human to go through all of them and form an objective opinion on the state of the art. So it’s prone to confirmation bias too.

I’d argue that there isn’t a common yardstick of technical progress for game theoretic reasons but I’d love to see one.


> A milestone is a working product with many user videos on YouTube

So Waymo then?

https://www.youtube.com/results?search_query=waymo


Seems like it might be a bit difficult to have user videos on YouTube if you can't legally have users. Getting the permits issued is a good first step.


Tesla has tons of videos on YouTube from users showcasing similar technology, including in San Francisco.

While FSD beta may still have issues, I see no reason to believe that this doesn't also have similar issues.


I wasn't aware that Tesla was operating commercial autonomous rideshare in SF. Can you link something? I've seen the FSD beta, but that's neither "L4" (whatever you think of the SAE levels) nor commercial. This announcement is specifically about safety drivers and commercial operation, because both Cruise and Waymo have been operating non-commercial fleets with safety drivers for awhile now.


Tesla is currently in beta everywhere, not just SF. Tesla is taking the path of improving the technology with driver monitoring, rather than focusing on getting a single city to label them as 'L4'


Tesla FSD can't operate without a driver and isn't doing commercial rideshare, so it's not relevant in this discussion of a permit to operate a driverless commercial fleet in SF.


George brought it up in the thread and you responded to him. Tesla and Comma.AI are building the exact same technology. Are you saying that we shouldn't discuss these other companies until they have a permit for SF driving?


George asked why this announcement of the first permit to operate a driverless, commercial fleet is progress, citing Tesla. Unless Tesla has a commercial, driverless fleet somewhere, I'm not sure I understand the objection here. Having the permits to do those things in SF is literally the progress.


There isn't an objection. There is a question. Can the public see videos of this? If not, why not?


Waymo has been doing commercial driverless ops in Chandler for 4-5 years now, so you can absolutely see videos of that. You just can't see it in SF because until this announcement no one has had the permit to do it commercially, Tesla included.


There must have been some sort of testing period and approval process that involved showing the cars are safe? Why can't you see that until after this official announcement?


I'm sure there are still issues, but I'd bet they're dramatically more rare than what we see with the FSD beta.


Why would you think that? If they have much better technology, that they are deploying on public roads, why wouldn't they let the public know with easily accessible proof?


https://youtu.be/sliYTyRpRB8

Obviously this is cherry picked, but you don’t think a car equipped with LIDAR, imaging radar, more cameras, more compute, and temporally dense HD maps that has focused on the same area for its entire existence would perform better than a Tesla?


I would expect a strictly mapped system would perform significantly worse in an area like SF, where roads and obstacles frequently change. Human drivers are not rigid, they are adaptable, so building rigid AV systems is doomed to fail outside perfect conditions.


They're not rigid - they use the map and fall back to onboard system when it's not accurate. That happens plenty of times in the video I linked.


> Tesla has tons of videos on YouTube from users showcasing similar technology

As you know, Tesla's technology 1) lacks lidar and 2) is not targeted towards driverless operation (yet).

The AV industry isn't in a place where it's building light AGI that elegantly handles the driving task through a sophisticated understanding of the world. The systems out there are still fit to their fairly near-term objectives; it's ridiculous to assume that Tesla's camera-only, hands-on-the-wheel system tells you which issues (if any) are faced by systems with different objectives built on dramatically different hardware.


That's like saying the old motorola razr and an iphone 6 are similar technology. While true, it's hiding a huge difference in that "similar".


It's not technical progress, but it's progress nonetheless. There are significant legal / legislative barriers to get through aren't there? This would, at least in my mind, be progress on that front...


Once software capable of driving at a superhuman level (say 10x the average) exists, the regulatory side will be trivial. What regulator wouldn't want to save 30,000 lives?

Sadly, that software is still a ways away. Don't fall for the trap believing that self driving is at all a regulatory problem. It is only a software one.


The disconnect seems to be that your mental model of self-driving cars might be biased by your current focus. The way most people see it, I feel, self-driving cars mean three things:

1. As a developer, the hardware and software to accomplish it.

2. As a rider, the improved UX (i.e. safe, comfortable, and fast).

3. As a current-driver, taking over the liability.

Based on the Comma 3 launch event, it seems clear that you and Comma.ai are hyper-focused on 1 and 2, but have no plan for 3 anytime soon. I think this is the big disconnect: Cruise and Waymo are focused on 1, 3, and 2 (in that order). DMV approval is entirely in the bucket of 'transferring liability', and this announcement is progress on that front.

The chicken and egg problem of 'publicly facing data driven technical viability to then get transfer of liability' and 'transfer of liability to then get data at scale' is a fundamental problem of self-driving. Comma.ai and Tesla are tackling this problem with 100% 'publicly facing data driven technical viability' and 0% 'transfer of liability'. Then one day Comma.ai and Tesla may also try to tackle the 'transfer of liability'. However, why limit the competition? Cruise and Waymo want to start with 'transfer of liability'. It's useful work that needs to be done eventually. Might as well have some companies start early.

Hopefully even if Cruise and/or Waymo mess up, the DMV will use those learnings to improve their validation strategies for future players looking to transfer liability (i.e. Comma.ai and Tesla when they choose to be ready).


Interesting. I was under the impression that self driving cars are not THAT far away from being a thing. Sounds like it's further than I am thinking.


It's a regulatory hurdle. Having passed it, field testing and work on product viability can continue.


Huh, Cruise got a permit for "between 10 PM and 6 AM" up to 30 MPH, while Waymo got no time limit and 65 MPH. Presumably Cruise's permit is actually for the 6 AM to 10 PM time period and not the 10 PM to 6 AM one!


Why presumably? 10PM-6AM is a much easier time to drive around. And it's not like they'd be going for daylight hours the other way.


Well, because this is a permit for commercial service, not testing. They already have a testing permit. A commercial service that only operates in the middle of the night would not be very successful.


A commercial service in the middle of the night would be a great complement to scheduled public transit which tapers off in the evening.


Public transit tapers off because demand drops off. There will be a spike around closing time for bars but it seems unlikely that you'd want to launch a whole commercial service just based on that.


> Public transit tapers off because demand drops off. There will be a spike around closing time for bars but it seems unlikely that you'd want to launch a whole commercial service just based on that.

Public transit also operates at an order of magnitude larger scale. You're talking about buses made to move 50-100 people, or trains made to move hundreds... not a car that will be moving 1-3. The reason many routes get cut isn't that there's no demand at all; it's that the demand is below the threshold needed to run the service profitably. If you can cut the cost of running the service by cutting out the driver, making the vehicle cheaper to run the route, and so forth... the route will come back.

I think this service will do fine. This is one of the issues SF has, in large part: it's very hard to get around at night because public transit is dead before midnight.
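The threshold argument can be put in back-of-the-envelope terms. A sketch with deliberately made-up numbers (these are NOT real transit or AV operating costs):

```python
# Break-even ridership for a late-night route: fares must cover the
# hourly cost of running the vehicle. Removing the driver slashes the
# hourly cost, so the route breaks even at far lower demand.
# All dollar figures below are invented for illustration only.

def breakeven_riders_per_hour(vehicle_cost_per_hour, driver_cost_per_hour, fare):
    """Riders/hour needed for fares to cover operating cost."""
    total = vehicle_cost_per_hour + driver_cost_per_hour
    return total / fare

# A big bus with a driver vs. a small driverless car, same fare:
bus = breakeven_riders_per_hour(vehicle_cost_per_hour=60, driver_cost_per_hour=40, fare=2.5)
car = breakeven_riders_per_hour(vehicle_cost_per_hour=15, driver_cost_per_hour=0, fare=2.5)
print(bus, car)  # 40.0 6.0
```

With those toy numbers, a route that only attracts a handful of riders an hour is a dead loss for the bus but sustainable for the driverless car.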


Buses rarely run at a profit. If your metro service is very lucky, user fares cover half the cost of bus service (look up "farebox recovery ratio"), hence some US bus systems going fare-free, since it makes a minimal difference financially.


> Buses rarely run at a profit

Semantics. Profit, net zero, sustainable with tax dollars, etc.

Doesn't matter. You get the idea, dude. It's about routes not being worthwhile because the damn transport option is too costly for just a few people. An autonomous car is gonna be more effective for these scenarios.


Rather than profitability, perhaps the right metric is utilization.

Even if you don't care about the revenue, having only a handful of people on the bus per run in the evening makes it hard to justify the cost of operating the line. Having other options to get home is great.


Sure, but they still care about cost and utility.


> it seems unlikely that you'd want to launch a whole commercial service just based on that.

You're thinking about this the wrong way. There's immense value in getting _any_ commercial service out there, not just in PR but in a full end-to-end test of all the ops/logistical/product/etc. challenges that come from having to deal with customers in reality. A good chunk of a decade and billions of dollars have been poured into the research behind these services. I doubt anyone at Cruise or Waymo expects these limited services to be profitable, or even cares; that's entirely missing the point.


Presumably, they want to launch a commercial service to get some real-world data and a foot in the door; the intent is to expand both time and geography.


> great compliment

Great complement. Otherwise your comment becomes very confusing :-)


> A commercial service that only operates in the middle of the night would not be very successful.

I doubt they're going straight for the meatiest market. A commercial service of any sort would be a huge coup for a self-driving car company (as the Phoenix-area launch was for Waymo). In SF's case, it also provides a foothold in a market that's actually economically sustainable. Once the technology reaches the point where more complex and crowded scenes can be handled safely, expanding is relatively trivial compared to spinning up from scratch.


I see where you're coming from. Although it could still be a type of scale-up testing or congestion control. What would be the point of not permitting overnight service though?


Why does night seem less likely? I'd expect many of the sensors work equally well or better (no sun interference) at night, and there will be a lot fewer other cars to crash into/cause a traffic jam when the car shuts down and has to be recovered.


The much lower speed limit, I think, backs up that Cruise's permit is actually less permissive, and is most likely for night-only operation.

Waymo has had driver-free cars in Phoenix for over a year, so it would be strange for Cruise (who doesn't have a driver-free robo-taxi service already) to get a more permissive permit than Waymo.


Waymo has a safety driver, Cruise does not.


This release does not indicate a requirement for Waymo to have a safety driver. I don't think there is such a requirement.


Too late to edit, but I was wrong, it seems that Waymo did not request a permit for cars without safety drivers.


They have recently been operating without one.


They've been operating without safety drivers in Phoenix, but not in SF. AFAIK, every Waymo car in SF has had a safety driver in it. Cruise has been testing driverless rides the past few months in that timeframe/speed limit.


I suspect it's just a reflection of what they applied for.


Who is liable when someone gets hit by a self-driving car? It was always a joke in college that the dream was to get hit by a university bus and have your tuition paid off in a huge settlement. I'd imagine if a Waymo car made its way around the more desperate parts of the Bay Area, like the Tenderloin, and liability ended up shifting to a huge company with billions of dollars, we might end up with a Russia sort of situation in terms of rampant insurance fraud. Maybe Waymo et al. will just respond by never servicing these areas, which will no doubt open an entire can of worms in the press and among the most virtuous online.


> a Russia sort of situation

In Russia, the rampant insurance fraud ended in everybody getting dashcams to defend themselves. Waymo and Cruise have way more than just a dashcam: if you can successfully commit insurance fraud while being recorded on a dozen cameras as well as lidar and infrared sensors, you probably deserve the settlement.


Set a mark, people will aim for it.


Aim, sure.

That doesn't mean enough people will hit it to matter.


The cars themselves are studded with every kind of camera and sensor you can imagine; it'd be pretty hard to pull off an insurance scam.


> Who is liable when someone gets hit with a self driving car?

Short answer: in the U.S., it will depend on the facts and circumstances. Common law has many drawbacks. But organic adaptability is one of its advantages.


> Who is liable when someone gets hit with a self driving car?

While this is not exclusive of other liability, probably one or both of:

(1) The person hit, if they were breaking the law in a way which made it unreasonable to expect a driver to avoid hitting them,

(2) The manufacturer of the self-driving vehicle, under normal defective product liability principles.

The owner and/or, where different, operator of the vehicle, as well as other people in the chain of commerce may also be liable, especially when (2) applies.


I mean, any pedestrian fatality will have a full 360 lidar recording of the context. I suspect fraudulent claims could be caught pretty easily.


How would you even discern fraud from the real thing? You could get drunk, act drunker, and stumble onto the road. You'd blow wet on the breathalyzer, and the story is plausible enough that a jury will side with you (the innocent guy on a night out, or the guy down on his luck) over the scary robot-car company. Then a precedent will be set.


Isn't public intoxication against the law, as is jaywalking? What happens if the exact same situation happens today but to a normal car with a driver?


Except where there is a specific law that makes the violation part of your fact pattern (e.g. stand-your-ground laws that require someone to be committing a crime for the law to apply to the self-defense case), violating the law at the time doesn't really mean much when it comes to who pays up. The law is between you and the state. Deciding who pays up is between you and the other party.

You can't just stumble onto the highway where the normal traffic flow is well in excess of the speed limit and expect to get a payday because whoever hits you will be speeding.

That said, most of this goes out the window if you are committing one of the handful of crimes that society has decided exempt you from most of the protections of the legal system (terrorism, driving drunk, etc.). If you are drunk and someone jumps in front of your car to commit insurance fraud, it won't matter if you have 360-degree 4K video of it; you will likely lose.


Who is liable now if this happened with a human driver involved? Aren't there any rules about it already?


> Who is liable when someone gets hit with a self driving car?

As is usually the case with the legal system, whoever has the least amount of political/economic power. So either the safety driver or the victim.

https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg


I signed up for the waitlist to use this in San Francisco. Anyone here work at Waymo and know when/if I might get approved? I'm curious to try it out.


Given the hours of operation, I wonder what happens when your self-driving car shows up with puke in the upholstery from a previous rider


You hit a button in the app and they send you a different car, while the dirty one goes to a cleaning station.

Possibly they give you a small credit for the trouble, and maybe ding the previous rider somehow.


Not that I think this problem is unsolvable, but I do think it's harder than you make it. And a problem only autonomous (really, unchaperoned) cars have.

A car with a spill in it, bodily or otherwise, is probably out of commission for the night. Depending on the type of stain and time before treatment, it could take the car out of the fleet for a few days. Eventually self-driving cars have bus seats? idk.

No problem, we can make the rider accountable. We have cameras in the car. First issue here is you can't trust users alone to provide accountability. Say I put my leftovers on the floor. You get in next and don't notice. The next rider however complains of a strong fish smell in the car. The car is taken out of service and your reputation is dinged since you were the last person in the car.

That's no good, so we'll need humans to review video and find culprits. Spinning up such a team will require software, managers, and oversight. Being alone with your date in a self-driving car is _exciting_ so you definitely don't want your employees to freely copy or share private videos of riders.

Like I said, this stuff is all solvable with centralized teams, but it cuts into the profit fantasy a bit. And it simply didn't exist when every car in your fleet came with a chaperone incentivized to take care of your/their car.


You're way overcomplicating things. Uber already handles this kind of stuff. The driver gets paid on the dime of the person who threw up ($50-200), and the driver is responsible for cleaning it up. It'll just be a centralized maintenance hub instead of the driver. You won't get dinged unless you consistently get reported, because Waymo (like Uber) will follow the Amazon customer-is-always-right model.

These are all solved problems.


I think you're overthinking it. Yes, those are issues, but they're hardly blockers. Few rides will fuck up the interior, and fewer still will have a dispute about who's responsible for a mess. Compared to having one 'employee' per car at all times, there's still plenty of economies of scale to exploit for profit.


Sounds about right. No real difference from a human-driven Uber


Open up your app. Hit 'Cancel ride', hit 'Unsanitary', wait for the replacement taxi. Really, it's the same problem Amazon had with delivering high value goods when people weren't in. If something goes wrong, assume the customer is telling the truth and try to make it right, but if they're at it, ban them from your service.


I'm sure they have a process for "car unsafe" arrivals.


Ironic how, by 'saving money' without the human driving the car, they'll probably end up spending a mountain more on engineers constantly debugging this software, along with cleaning staff and maintenance for the entire fleet, legal staff for regulatory issues, and no doubt a huge insurance bill. Sunk cost is a strong fallacy to see past, I guess.


I don’t think you grasp the scale of this change. You think that this “mountain of engineers” will equal the (eventual) number of taxi drivers displaced? WORLDWIDE?

Also, you think those engineers, capable of creating an autonomous car, will somehow find it too difficult to install liquid detectors all over the car and combine them with interior cameras, which would allow for instantaneous, remote inspection of the car and imposing penalties / taking the car offline – all of which is possible with existing technology?


All I'm saying is that when you look at other sectors with low-cost labor, smart money like McDonald's has had the tech to replace their burger flippers with robots, just like the auto industry did in the 1970s, so there's probably a good reason why they still have human burger flippers today.


Robots that can flip burgers are expensive. Robots that can drive us around already exist and are so affordable that we see billions of them on the roads; they're called cars. All the road robots need is a better steering system; we don't need to invent tricky machinery. Burger-flipping robots, however, would need tons of new movable parts to be invented, mass-produced, shipped all over, etc. That would be expensive.


One driver, one car. One engineer, thousands of cars?


Have there been any attempts to make hybrid autonomous train-cars? Like some sort of enclosed road/track highway, or a caravan? Seems like a safer goal than the chaos of urban environments.


As another responder said, lots of subways are autonomous, but if you're thinking of a car-sized thing on a train guideway, these have been proposed but only a small number have actually been built: https://en.wikipedia.org/wiki/Personal_rapid_transit


Subways are good (though not that automated in New York, due to union pressure), but a lot of American cities are unfortunately designed around highways rather than mass transit. I could see some system where cars drive onto a highway and then get put into a caravan, or a special lane that automates the driving until their exit.


There are already automatic subway systems. You could make BRT autonomous pretty easily too, since it runs at its own separated grade on a fixed route. You could throw down some sensors in the road surface and call it a day, like the technology used in the new Disneyland Star Wars ride.


Electric trolleys existed in many cities in the past.


> Verifying the technology is capable of detecting and responding to roadway situations in compliance with the California Vehicle Code, and a description of how the vehicle meets the definition of an SAE Level 3, 4 or 5 autonomous technology.

How does the company prove this? Are there detailed criteria listed somewhere that these companies then have to prove they meet? AFAIK, it's all self-reported. I mean, if the company is confident it can do it, I guess it's a good thing, but is it enough?


Can this be used to move cargo from place to place without people in the car?


Walmart already has a delivery pilot with an AV company (I think in AZ?). I'm not sure if it's without safety drivers yet though


For reference: 30 mph ≈ 48.3 km/h
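FWIW the conversion is exact, since the international mile is defined as 1.609344 km; a one-liner to check the two speed limits in the permits:

```python
# Exact mph -> km/h conversion (1 mile = 1.609344 km by definition).
MPH_TO_KMH = 1.609344

def mph_to_kmh(mph):
    return mph * MPH_TO_KMH

print(round(mph_to_kmh(30), 1))  # 48.3  (Cruise's limit)
print(round(mph_to_kmh(65), 1))  # 104.6 (Waymo's limit)
```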


> The DMV has now approved three deployment permits.

Waymo, Cruise, and who?



This is off topic BUT DAT FONT SIZE!!! ~ I had to double and triple check my zoom settings in my browser. I'm used to everything from the CA DMV being boring and mundane like Ben Stein. After poking around, I realize that's just the font size for DMV New Releases.... but wow... caught me off guard. Low quality comment, down-vote if you must; I've said what I came here to say.


It is logical to assume self driving cars start off in low risk areas, prove their concept, and gradually take on more and more use cases. Things like retirement villages and so on, where this incremental approach has already had success. Skepticism is easy when thinking about the grand scale but this incrementalism will surely win out in the end.


"within designated parts of San Francisco" - as for Cruise, which parts of San Francisco those are?


The press release says they have issued three permits. Which is the third?



Nuro has little self-driving delivery vehicles. But someone has to come out to them and take the stuff out, so they're not all that useful.


Hm. Not sure I would say that... there really isn't a one-size-fits-all unloading mechanism across trucking/delivery, and there will be a human involved in that process until we really, really standardize.

The value is in cutting the labor count by a third: you still need a loader and an unloader, but no more driver. In food delivery this essentially removes your labor cost, since the restaurant and the consumer are the loader and unloader.


Thanks. I see them driving around the neighborhood but haven’t noticed anyone in the back seat, driver only.


So, when can I hail a Cruise vehicle?


after 10PM apparently.


I wonder what (if anything) they're going to do about the fact that these cars carry $200K in additional equipment (maybe more), which will be immediately sawed off and broken into by SF's extensive homeless population. The Bay Area seems like one of the worst places in the US to leave expensive stuff unattended like this.


California is so progressive: first they legislate away Uber drivers' jobs and convince everyone it's for moral decency, then in the same fell swoop they make their newly 'liberated' employees completely redundant.

It must sure feel good to know that you would be an employee, if only anyone wanted to hire you, while you're waiting in line for the dole.


Driverless cars with passengers operating on streets shared with pedestrians, insane taxi drivers, and the prototypical buzz-cut BMW suit shouting into his cell phone who just drives around everyone like they're not there. How could this possibly go wrong?


How mad would you have to be to get into one of these?


This is what people once said about elevators, airplanes, bicycles…

How mad would you have to be to keep riding in human-driven taxis after the data shows AVs to be safer?


Honestly for the early ones those people were right.


> airplanes

Have pilots.

> bicycles

You steer them.


And the early airplanes were canvas over wooden sticks, and the early bikes made you fall over if you leaned a little bit funny. I'll let other people break their bodies on the penny-farthings of the day; I'll wait for the safety bicycle to come along, and I don't expect to miss out on much in the meantime.


If, after a year of testing, autonomous taxis prove to be exactly half as dangerous as human-driven taxis, would you ride in one?


That would depend on the rigor of the testing being done and where this thing is taking me


Let's say, for the type of journey you need to do (e.g., traveling within a city at <= 30 mph in clear daytime weather), the AVs in question have provided over 100 passenger-years of service, and across that dataset, at-fault crashes are half that of human-driven cars at every level of severity (fender-bender, injury crash, fatality...)

At that point, would you choose to ride in one, or a human-driven taxi?


It depends on the context of the test, once again. Is this thing trying to get onto a major arterial road in LA? Hell no. 30 mph would feel very dangerous when the other drivers are flying around, weaving at 50-60 mph. The thing would be cut off so much that, with the automatic braking every time someone cuts it off, its average speed would probably be less than me walking on the adjacent sidewalk. Bus drivers in LA have to be aggressive and honk and muscle their way back into the lane for cars to actually let them back in, sticking their bus out and literally threatening cars with it. I've seen them outright cuss drivers out plenty of times too.

Freeway in LA in a self-driving car? Absolutely not. Full stop. I've seen more variables play out on a given day than any training model can possibly cover with good support. If someone says their model has good support for an LA freeway, I will laugh in their face. The freeway is chaos. You have people with scrap piled 12 feet over their pickup truck doing 40 mph in the middle lane, people with BMWs and a cell phone in one hand doing 100 while splitting lanes with no signals, outright debris on the road (my roommate hit a log in the middle of the 101 that took out another dozen or so cars one morning; how did a huge log even end up there?), not to mention construction conditions and less-than-stellar road markings during that construction, making lane identification no small task for a self-driving system.

On a little loop through something like a zoo, sprawling corporate campus, or as a vehicle to get me from my parked car to the front of the venue, absolutely I'll hop right into a self driving car. On the road with the current state of the road in my city? I'll let someone else take the literal hit today.


Self-driving cars without a safety driver haven't been approved to operate in LA yet, nor on Bay Area freeways, nor anywhere that the speed limit is above 30mph; I think it's clear they won't get their approval to drive in such conditions until they've proven their ability to do so safely and without disrupting the traffic around them.

What we're discussing right now is riding in one on Bay Area streets where they've already proven their safe and orderly track record.


Do you feel unsafe riding in a human driven taxi? If not (I don't), why would you want to ride in a machine driven one? I simply do not understand the HN desire for sitting in machine driven cars, which have already demonstrated far, far poorer safety than human driven ones.

Computers are good at some digital things. They are not good at others.


I feel unsafe any time cars are involved, whether I'm outside, a passenger, or driving. Cars are much more dangerous than a lot of things people worry about. I'm very happy to see computers making them much safer.

> machine driven cars, which have already demonstrated far, far poorer safety than human driven ones

Citation needed.


At 30mph max in the back seat with the belt on? Not that mad I reckon.


As opposed to having one with a human driver?


I dunno? I guess I'd have to look at the statistics. My sense is that the Waymo vehicles at least are statistically safer than a human driver at these speeds, although not as competent. (I.e. they are less likely to cause an accident, but more likely to encounter a situation they can't handle and get stuck.)


I'd love to see these vehicles handle situations where pedestrians or debris are all over the road. Like in front of a busy bar, construction area, or homeless camp. To the best of my knowledge the environments they've been test driving in have been pretty controlled and orderly and they still manage to find people to run over.


You're wrong twice over: Waymo and Cruise have been testing in SF for a long time, and so far neither has run anybody over.

Uber ran someone over, it turns out they were shockingly incompetent, and thankfully they don't have an AV program anymore.


Wait, I see them everywhere in SF and in completely random places too…


> As opposed to having one with a human driver?

Given some of my recent Uber drivers in the Bay Area, I reckon a blender would be a safer ride.


They're already successfully deployed in a subset of the Phoenix metro area, so not very mad, probably.


How long does it take the "safety driver" to react at 65mph?


Per TFA, they're limited to 30 MPH and will not have safety drivers.

And obviously the launch is conditioned on regulators' belief that the system can handle reacting, to a degree of safety that's similar to human drivers: otherwise they wouldn't get the permit. The reaction time of safety drivers is not what this approval pivots on.

Though obviously the interesting question is: how did the company and the regulators get to this level of confidence in the system's safety?


They are referring to the multiple parts in 'TFA' talking about Waymo using safety drivers and a 65mph limit.


Ah, thanks for clarifying. That makes way more sense.

BTW, you seem to have been reading "TFA" as some sort of snark. It's a pretty well-established bit of internet jargon that's (at least in my experience) more winking than aggressive at this point.


"In order to receive a deployment permit, manufacturers must [... Develop] a Law Enforcement Interaction Plan that provides information to law enforcement and other first responders on how to interact with the autonomous vehicles."

So they get to write the rules on how law enforcement interacts with their vehicles? Can I do that too?


Yes - you may be able to.

This involves automatically pulling over and stopping the vehicle when a cop is behind the car and flashes their lights. The vehicle will also roll down its windows so the cops can access the interior of the vehicle and have easy access to passengers.

This also automatically links in rider support to communicate directly with law enforcement; rider support can then issue various commands to the vehicle.

They can also open a door, which triggers an autonomous-driving cutout. They can also contact these folks directly to gain access (i.e., remote unlock, etc.).

To retrofit your own vehicle so that law enforcement has this type of direct access may take some effort, but I doubt you'd have much objection: there is currently a real issue with drive-offs and law enforcement's ability to respond to them under vehicle-chase rules. If you can set up your vehicle to override your drive-off efforts, that will probably be welcomed, especially if it pulls over, stops, and can unlock the doors to give LEOs access to your person, etc.

Separately, rider support can give permission for a vehicle search as the owner of the vehicle, so you'd want to register your vehicle with a service that would give consent automatically and then provide instructions on where to find the registration and insurance in the vehicle. Waymo, I believe, keeps them on the visors.

I'm not sure you fully grasp the implications of Law Enforcement Interaction Protocol efforts. In the future, the car may be able to drive you to a "safe" location for a car search and your arrest.


Which should be codified into law based on police requirements, with an eye to what is legal.

Police procedures should not be developed by way of an agreement between the state and a private company, which is in a position to err on the side of cooperation because it needs a license. It is similar to the state granting a locksmith a license to practice on the condition that law enforcement be provided with copies of every key, just in case they need them.

There are some serious Fourth Amendment considerations when traveling in a vehicle for hire without a driver. Is a Waymo closer to a private automobile with implied consent, or is it more like a bus, where you can deny consent to a search? Is the provider (or virtual driver) in a position to consent to a search of your property? Is the virtual assistant that opened the doors and consented to a search now in a position to be charged with transporting narcotics for distribution? Whose license gets dinged if a passenger has no seatbelt?


The current setup is that Waymo owns and operates the vehicles, and they have protocols to cooperate pretty extensively with law enforcement.

In other words, they WILL stop the vehicle, roll down the windows, unlock the doors etc for law enforcement, give permissions out for a search etc.

There is a separate question (unrelated to Waymo) around whether that is enough, for example, for LEO to search the vehicle. My guess is the resolution will be that, like a taxi, LEO can search the vehicle, and Waymo will build that implied consent into the terms of service if needed. Hopefully patting down an individual will still require consent or cause or similar.

The OP seemed to imagine that these services are going to be some sort of lawless area, and they are not, they are going to track and record the start / finish / stops of all your trips, record you in the vehicle, record the space around the vehicle and will cooperate with LEO.


Do you really think this is a helpful (much less charitable) interpretation?

This is asking the manufacturers for a user manual, like how to disable auto-piloting etc.


At a time when it seems that police are unwilling to interface with regular human drivers, to the point they suggest that drivers need to preemptively place both their hands on the wheel so officers don't become spooked, it's reasonable to ask why companies get to specify deliberate procedures. As in, why isn't the attack first and ask questions later standard good enough for them too?


> to the point they suggest that drivers need to place both their hands on the wheel so officers don't become spooked, it's reasonable to ask why companies get deliberate procedures.

I don't understand. Isn't "place both hands on the wheel, show me your license", etc., precisely a "deliberate procedure"? It sounds like your complaint is that police are too procedural instead of engaging on a human level, but then you complain that they're not _more_ procedural with humans?


I'm talking about who is setting the procedures, not simply their existence.

With autonomous vehicles, they're asking how to interface with them politely. Like if they signal one to stop and it's not stopping, here is a phone number to call where someone can gracefully shut it down. As opposed to just deploying spike strips or ramming as is their default.

Whereas human drivers don't get the luxury of specifying any such protocols. And rather than reform their own procedures, police are attempting to compensate by creating protocols drivers are supposed to know and follow before an officer is even at the car.

As a motorist I'd love to be able to choose a protocol that police would follow when pulling me over. For example, call this phone number first rather than just walking up to me and risking yourself getting spooked and me getting killed.


> With autonomous vehicles, they're asking how to interface with them politely. Like if they signal one to stop and it's not stopping, here is a phone number to call where someone can gracefully shut it down. As opposed to just deploying spike strips or ramming as is their default.

This is quite an extraordinary assumption. Do you have any evidence of this that I missed? My assumption was closer to something like "here's how you communicate to an SDC that it should pull over; here's the decision path it will follow when pulling over; here's how to get in touch with the company".

I don't see any indication that an out-of-control car will be given free rein to rampage (in situations where a spike strip would be used on a human driver) while the PD waits on hold with the operating company. Honestly, this sounds like a paranoid fever dream.


The protocols Waymo and others are and will be offering will prevent passengers from trying to drive off. They will stop the vehicle, unlock it, and roll down the windows for the cops, etc. They will also record the outside and, in some cases, the inside of the vehicle.

Again, I think you could write a similar protocol. Pull over, roll down windows to allow cops easy access and visibility to your person, and unlock all the doors so they can open them. Maybe do a dash cam plus interior cam to get cops the footage they need that way?

Waymo also will automatically connect to a ride support operator who can do things like give permission for a vehicle search if needed. Not sure how you could copy that; maybe vehicle owners could put a sticker on their cars saying that, regardless of what the passenger wants, the vehicle can be searched at any time?


Information, not rules.


Johnnycab: Hello, I'm Johnnycab. Where can I take you tonight?
Doug Quaid: Drive, drive!
Johnnycab: Would you please repeat the destination?
Doug Quaid: Anywhere, just go, GO!
Doug Quaid: SHIT, SHIT!!!
Johnnycab: I'm not familiar with that address. Would you please repeat the destination?


Having to verbally specify your destination address was always a lousy UX. Uber's UX where you enter it in an app is much better and allows using your address book, upcoming calendar entry, or searching by name.


> https://twitter.com/elonmusk/status/1442706542839701510

And Tesla is rolling out FSD beta to cars based on drivers' safety score.


Have you watched the footage of the current "beta"? It's a scam. It runs into stationary objects. It drives into oncoming traffic. It runs red lights, makes illegal turns, drives straight toward other cars, and can't stay on the road. And that's footage recorded and edited by Tesla's biggest fans.

"Full Self Driving" does not exist. Every indication is that it never will.

I still can't believe no one's gone to prison over this. Tesla's "self driving" features are probably the biggest fraud in history with a body count to match.


People don’t fully appreciate how different driving in multiple locations/conditions is to a neural network.

Since most drivers only operate in certain fixed conditions often (e.g. driving to work or along certain routes during the day/night), people are going to get vastly different experiences of the same software.

This is why you get some people claiming FSD beta is a fraud and others claiming that it works well with 0 disengagements 99% of the time.

I think their tech is probably a LOT better than what you're making it out to be, but nowhere close to living up to its promise of full self driving.


I wouldn't say it's a 'scam' exactly, but 'actually an alpha labeled as beta' is probably accurate.


People are terrible at driving. Releasing a half-baked product to get the data needed to improve it makes complete sense and deserves to be encouraged, even if people die. Because in the end it's likely that many lives will be saved.


You don't get to smash a 4,000 pound explosive into a preschool because you want to gather data.


If that data saves more lives it seems like a positive to me.


What does this mean? Hasn't Tesla been selling FSD beta for like 5 years now?

I don't really trust anything Elon says anymore. I feel like he's failed to follow through on commitments about FSD far too many times. I'd personally never buy a Tesla until they get things much more in order (build quality/fit and finish, and actually delivering FSD).


For the vast majority of people, Tesla's vehicles are a practical, real, available, easy, modern electric car experience that is a near drop-in replacement for ICE vehicles, because you can road trip with them. There are zero competing offerings available today that have the range of a Tesla plus an extensive integrated high-amperage charging network. The only gripe is that FSD hasn't been delivered on the projected timelines (and may never be). I suspect you wouldn't buy a Tesla based on moonshot FSD claims anyway, so what's holding you back? The Elon hate train?


The reasons why "the vast majority of people" are not considering purchasing an electric vehicle have to do with uncertainty about the charging experience and long-term maintenance and repair costs. They have nothing to do with FSD.

Once the costs of ownership are well understood, that uncertainty will be resolved and the cars can be priced correctly. That will take time: if I buy a 10 year old Corolla with 100K miles, I will know (after the usual mechanic inspection) what my expected maintenance costs will be, on average, for the next 5 years of ownership.

You can't really say that for an out-of-warranty Tesla, because there isn't a large group of independent mechanics that can service it and because there is a lot of uncertainty about battery life and service costs. Over time, if Teslas remain popular, and if the company doesn't DRM them out of existence, such networks will develop and we will accumulate actuarial data on repair costs that will reduce risk.

Until then, this market will remain primarily one of price-insensitive or risk-insensitive customers, e.g. wealthier people and possibly government or other large corporate fleet sales. That is exactly what you would expect for new technologies. But for most people, a car is a major proportion of their net worth, one they need to get to work, and so they are going to be risk averse (which is exactly what they should be).

My next car will most likely be a 10 year old Lexus or Acura product. Even if it's a different manufacturer it will be an ICE vehicle and not an electric vehicle. Purchasing an out of warranty Tesla isn't remotely close to a viable option as it's not possible for me to price it. But used car prices drive new car prices because most purchasers are not planning on driving the car until it has zero value, they are counting on the used car market to pay for half of their car purchasing costs. Really for new technologies, Tesla should focus on leasing their cars if they want to reach out beyond the price insensitive market.


I'm suggesting that people are buying Teslas. I honestly can't go on a 5-10 min drive these days without seeing one or two (more and more every week, it seems) and I don't live in some particularly affluent upper-middle-class area. Buying them just makes sense to a lot of people for various reasons. Most aren't worried about FSD, and the track record so far for Tesla total cost of ownership seems to lean in Tesla's favor. The motor in a Tesla literally has one moving part; they don't need regular maintenance like an ICE engine with lubricants and spark plugs and belts and fluids and transmissions. Even the brakes don't wear the same way, because regenerative braking slows the car 99% of the time.

The only big question for me is when the battery loses so much charge that it affects range quality of life, and if that happens, whether the price of replacing it makes sense (I am generally pretty optimistic that battery prices will have dropped in 8 years), or whether it becomes a shorter-range around-town car (kept or sold as such) and we get a new one with fresh range. Even at half the range it's still directly competitive with all other EVs out there today, so that's a thing too.


What percent of car owners drive Teslas? There are ~300 million vehicles on the roads in the US. Pre chip shortage, ~17 million new cars sold each year. Of those, electric vehicles are what, 2% of the new car market and ~0.6% of the total market (used car market is twice as large as the new car market)? That is not "the vast majority of people".
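For what it's worth, those rough figures are internally consistent. A quick back-of-the-envelope check (all inputs are the approximate numbers from the comment above, not official statistics):

```python
# Sanity check of the EV market-share estimate using the rough
# figures quoted above (assumptions, not official data).
new_cars_per_year = 17_000_000   # pre-chip-shortage US new-car sales
ev_share_of_new = 0.02           # EVs ~2% of the new-car market
used_market_multiplier = 2       # used market ~2x the new-car market

new_evs = new_cars_per_year * ev_share_of_new          # 340,000
total_sales = new_cars_per_year * (1 + used_market_multiplier)  # 51,000,000
ev_share_of_total = new_evs / total_sales

print(f"{ev_share_of_total:.1%}")  # prints 0.7%, in line with the ~0.6% quoted
```

So EVs end up well under 1% of total vehicle transactions, which is the point being made: visible growth in one neighborhood says little about the overall fleet.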

Where I live I see FJ Cruisers, Porsches, and Bentleys every day, but I understand that my area is not typical. It's a wealthy area in which offroading is very popular. I also see Teslas, BMWs, and Mercedes (more Mercedes than BMW and more BMW than Tesla).

I also understand that those cars tend to be more memorable, so it's foolish to pretend that FJ Cruisers, Porsches, or Bentleys are suitable for the vast majority of the public just because they seem suitable for many people in my area. In the same way, perhaps you should recalibrate your expectations as well when extrapolating based on what you see in your area.

Now what prevents a Porsche or Bentley from being suitable is that these are for niche performance or price insensitive markets. But what prevents something like the Tesla from being suitable is the charging network and repairability uncertainty. With a Bentley, you know you will pay through the nose for repairs which is why Bentleys depreciate so massively. But with Toyotas you know their reputation for durability, which is why used FJ Cruisers cost more than used Bentleys. But how much should a used Tesla go for? I honestly don't know -- and neither does anyone else right now. Unless you have a lot of money to play around with, that's too much of a risk. You don't spend 15-20% of your household net worth on something that you can't price.


> You don’t spend … that you can’t price.

I think you’re massively overthinking things. Most people look at the Model 3, see it costs the same as a Toyota Camry, want to help the environment and have Spotify in their car, skip the dealership, go to tesla.com, see dirt cheap financing available, and make the purchase. If they don’t it’s because they’re not comfortable giving up their gas car yet because that’s what they know. Not because they can’t “price it”. Further in the current market Teslas depreciate noticeably less than ICE vehicles because they don’t need maintenance.

You may not be able to price a Tesla, and that's fine, but don't get the impression that just because you can't, others can't either, or that it's the sole factor in why they allegedly don't buy. Where I live nobody is "pricing" their $65k truck, they just want a big fucking truck, 30% of their household net worth be damned! You're full of FUD.

Lastly, looking at overall population numbers is kinda silly. We're not going to wake up and find everyone has replaced their perfectly fine car with a Tesla. If you look at market share growth, Tesla shot up in 2017 with the Model 3. And it wasn't just a balloon; their growth has continued. They sold 500k cars last year, a little less than half of their cumulative total up to that year. Again: people are buying Teslas, and a greater percentage of people are buying every year. A Tesla was a rarity 5 years ago, but that's no longer the case. The fact that we can observe this with only 1.5MM Teslas on the road is pretty cool.


This version is much more advanced than their previous versions, but I generally agree with you. It's nowhere close to being able to get you through most drives without assistance.


The "safety score" is a really really creepy thing. Not only that they are using it for this, but that they're collecting this data in the first place.


Uh, they only collect it if you literally approve it on a full-page popup that explains what the safety score is and how the data is shared.

Just to clarify your position: you want to hand over beta car-driving software to people who do not first pass a test for safe driving?


> Uh, they only collect if you literally approve it on a full page popup that explains what the safety score is and how data is shared.

I don't think that's accurate. Tesla collects a whole host of telemetry on all of their drivers. "Shadow Autopilot", etc., etc.

They show zero hesitation in releasing that telemetry to the media, or elsewhere, if there's the slightest hint it will put you under scrutiny rather than them. Even misleadingly so - witness a fatal accident where Tesla said "the driver was not paying attention - the vehicle had warned him about taking his hands off the steering wheel", neglecting the ever so slight detail that it had warned him, once, and that warning occurred a quarter hour before the crash.

Tesla collect all this data constantly, not just for entry into the FSD beta program.


Nobody can drive a car without passing a test demonstrating they can drive safely. If passing the government approved driving test doesn’t make someone good enough to use your software it probably shouldn’t be allowed on the road.


The difference is scalability. Tesla can manufacture fully equipped vehicles faster and much more cheaply. There are also already 1.5 million vehicles that could be turned on. They do not rely on HD maps.

Even if they arrive 1-2 years later than Waymo/Cruise, they can quickly dominate the driverless market.


> Tesla can manufacture fully equipped vehicles faster and much more cheaply. There are also already 1.5 million vehicles that could be turned on.

This sounds like premature optimization. The real unsolved problem here is autonomous technology (software and hardware), not manufacturing equipped vehicles. No point having millions of vehicles if your self driving technology doesn't actually work.

> They do not rely on HD maps.

Precisely why FSD doesn't work and is still level 2. The AV players have shown that fully autonomous driving needs HD maps, and (conspicuously) the only one not to graduate out of level 2 self driving is the one that's not using them: Tesla. Creating/maintaining HD maps isn't nearly as hard as you think it is, but the benefits are incredible for safe driving.


Tesla is 5 years behind and has no plans to begin work on a number of the key challenges they're facing.


> Even if they arrive 1-2 years later than Waymo/Cruise

It's tough to anchor Tesla's arrival based on the rest of the industry's progress. Cruise and Waymo are heavily dependent on Lidar; Tesla is solving a fairly different technological problem.


Tesla is getting some real competition


You can buy the Waymo software as a consumer, and put it in your own car right now?


I'll sell you a self-driving car. Please be advised that it will drive into walls, fail to brake for stationary obstacles, drive straight off the road, and is in fact a Saturn with a brick on the gas pedal. I didn't say it was a good self-driving car.

Tesla doesn't have competition because they aren't in the game at all. They do not make self-driving cars and probably never will.


Tesla's business model (and stock valuation) is based on launching a robotaxi service. I'd say that this news introduces more competition for the robotaxi aspect, not the selling you a car aspect.


No, but I can't do that with Tesla software either... I can with comma.ai though...


Eventually, you won't have to buy a car... there will be self-driving cars everywhere waiting for you to hop in.


Marketplace actually had a segment this morning on this and why it's misleading. https://www.marketplace.org/shows/marketplace-tech/teslas-fu...


Zero comments?

When the robotic overlords take over, it will not be to thunderous applause nor loud protestations, but rather quietly beneath the surface silence of billions of people distracted by something more interesting or enticing.


You think no comments within 20 minutes of posting is enough to make that kind of claim over?


It’s self driving cars, not autonomous governments. Considering how distracted and impaired the average driver is, this might be a godsend.



