Waymo’s Backseat Drivers: Confidential Data Reveals Self-Driving Taxi Hurdles (theinformation.com)
142 points by ballmers_peak on Aug 26, 2019 | 194 comments



That's useful info. I wonder where they had trouble with erratic steering. One Cruise video shows that happening while driving alongside parked cars on a narrow street where the parking is irregular. The self-driving vehicle is trying to stay in its half of the street unless absolutely necessary.

Waymo still has to use "safety drivers". So far, nobody seems to have full self driving without "safety drivers". Which makes it useless. That's the big milestone to look for. When Waymo can get rid of the safety drivers, which they tried briefly last year, they're getting close to something useable.

Some products take a long time to bring to market. Xerography - first copy made in 1939, commercial success 1959. We're now 15 years after the DARPA Grand Challenge, so we're coming close to that 20-year wait.

Television - first broadcast, 1928; commercial success, around 1948. So 20 years again.

Roller bearings - Timken founded 1898, Timken bearings in 80% of US cars by the 1920s. 20 years again. (Although it took a really long time for them to convert railroads. First locomotive with roller bearings, 1923. In 1949, they were struggling to get railroads to put roller bearings on freight cars.[1] In 1991, roller bearings became mandatory for US inter-line interchange of freight cars.) Air brakes and automatic couplers only took 7 years, but that's because the U.S. Congress made railroads convert and standardize between 1893 and 1900.

What took a really long time, in post-1900 technology? (Before 1900, manufacturing infrastructure wasn't really ready for fast deployment.)

[1] https://www.youtube.com/watch?v=R-1EZ6K7bpQ


> So far, nobody seems to have full self driving without "safety drivers".

A common saying around here is that we have two seasons: winter and (road) construction.

Construction zones have pretty much every obstacle to automated driving you can think of:

* painted lanes that don't correlate to the temporary lanes marked by cones

* lanes that don't correspond to pre-programmed maps / gps

* irregular and unpredictable vehicle and pedestrian entrances and exits (construction workers and trucks)

* areas where traffic is reduced to a single lane for both directions, and must take turns coordinated by humans with signs at each end of the lane

* speed limits marked by temporary signs

* rough, temporary transitions between pavement and gravel

Unless we can somehow get every state to compel every road construction company and every autonomous vehicle maker to use a single communication protocol, and to implement it at every construction site (so autonomous cars are made aware of these dangers), it's not going to happen.

Oh, and said protocol has to be hack-proof so trouble-makers can't start convincing cars that they're in the middle of a construction zone and force them out of their lanes on normal roads.

It's conceivable that the coordinated effort could happen, but I'm not going to hold my breath (due to the sheer increase in cost to the government) nor will I trust that said protocol will have fail-proof security.


>Oh, and said protocol has to be hack-proof so trouble-makers can't start convincing cars that they're in the middle of a construction zone and force them out of their lanes on normal roads.

Why would it be easier for trouble-makers to fool autonomous cars? As a human driver, I'd be fooled by pretty much any road marking or guy in an orange vest.


Correct.

It’s amazing how forgiving we can be of human error (accidents every year) but absolutely not of machines/autonomous vehicles, even when, statistically speaking, machines may make better decisions much faster (or at least no worse than human judgment)... I guess the feeling/perception of being in control is more important to us...

Another interesting observation I find in every autonomous vehicle discussion is how we focus only on edge cases... when in reality every tool we use today (including the cars we've been driving) is built for the general use case and operates in a mostly controlled environment.

Rather, thinking of the autonomous car as an additional pair of eyes and hands when we need it most might serve us well in the short run, before the technology matures over the next decade or two.

I’ll be really happy and relaxed if my car can mostly (70-80%) drive itself on my daily commute or my next trip to LA; expecting it to be my chauffeur is a bit too much, personally.


This is why I wonder why platooning technology is so much less hyped. Give me a platooning hardware kit for my current car, and enough users that I can find a platoon to join on my longer trips, and literally more than 95% of my self-driving needs are covered. I really do not care if I need to drive a 10-minute stint in the city every now and then. And if I do, I can take a taxi. But getting my hands off the wheel and eyes off the road on the highway is what would have real utility to me.


Platooning kind of messes up highway traffic because people need to either cross a ton of lanes to get in and out of the platoon (if the platoon is on the left), or non-platoons can't get on/off (if the platoon is on the right). If everything was forced to platoon on highways it would work. But that's like 30-50 yrs later, after the tech is introduced, barring some really radical legislation with huge popular and state support.


Not sure if it would be technically possible to have gaps in the platoon after every five cars, but at least it would be trivial to set the max size of one platoon to something reasonable.


Sure, yea, I don't mean like it becomes literally impossible to merge in. Just that for any reasonable size platoon, it disrupts more traffic than it saves. Given existing roads and the existence of not-platooned cars on the same roads, it doesn't really work.

Maybe it works for long haul trucking though.


You honestly trust some random driver at the front to make your decisions?


Well, that sounds pretty much like flying commercial or traveling by bus. So I guess, based on my travel history, the answer must be yes.


When I fly, it's a trained pilot. Even for driving, there's stricter licensing for a CDL.


And you think there would be no extra qualifications required for the platoon heads?


  amazing, how much forgiving we can be for human errors ... but absolutely not for machine
A given human failure produces one event. A flaw in autonomous driving software can mean thousands of failure events.

Also, when a human driver's negligence results in injury or severe damage, criminal charges result. That's a deterrent. With autonomous driving, you can't prosecute an algorithm.


Yes, the point about charges is a real issue, something that should be debated...

Would "use at your own risk" vindicate the company behind the autonomous vehicle? Or is the owner responsible for his vehicle's actions? I guess never in history have we had such advanced automation in the direct hands of consumers...

As for the failures, I have reasons to disagree... if autonomous cars are working with "unsupervised learning", my assumption is that they will most likely make different decisions for the same scenario based on the data at hand... so thousands of failure events, though they may look similar, may or may not end in the same results... similar to how we would react when faced with some unknown situation on the road... your scenario is more likely to play out with a bad batch of hardware devices/sensors/lidar/cameras etc. in an autonomous system...


>or owner is responsible for his vehicle's actions?

If it's sold as fully autonomous, i.e. significantly beyond Tesla's system today, I don't see how the manufacturer could not have the liability. How comfortable would you be to use a car that could expose you to severe criminal liability because some company made a mistake with their software?


Liability is assumed. I'm speaking of criminal prosecution, like an impaired human driver would face in addition to financial liability. The automated vehicle would face no criminal exposure.

The company responsible would also have a clear incentive to alter/destroy any damning evidence gathered in telemetry.


>The company responsible would also have a clear incentive to alter/destroy any damning evidence gathered in telemetry.

Not saying it doesn't happen. But now you've gone from a product liability case which rarely has individual criminal consequences to actions that clearly do.

If/when we get to this point, it will be "interesting" though. Outside of maybe the medical area, there aren't many examples of consumer-facing products that, when used as directed, kill people because sometimes "stuff happens." And people generally understand that's just the way it is.

It's not out of the realm of possibility to imagine government-approved autonomous driving systems that insulate everyone involved from liability so long as they're used and maintained as directed. See e.g. National Vaccine Injury Compensation Program. I'm not sure it's likely but it might become a possibility if manufacturers find they're too exposed.


> I’ll be really happy and relaxed if my car can mostly (70-80%) drive itself on my daily commute or my next trip to LA;

There's a caveat here: this 70-80% must be contiguous, and the car must be superhuman-level reliable in that segment. Otherwise, the "additional pair of eyes and hands" significantly increases the danger. If your car suddenly decides that it can't handle something and asks you to take over at the last second, you won't be able to handle it either.


My assumption is that you get to this for some subset of highways in some subset of weather conditions with some special rules in place (maybe mandatory maintenance schedules?).

Which is actually a big win, as long highway drives are boring and probably account for a decent chunk of the more serious accidents.

It doesn't give you the robo-taxi use cases that are what a lot of urbanites care about the most. But it would be a nice safety and comfort add-on for how a lot of people spend many hours of their weeks.


The problem is that you encounter edge cases on every drive, and you need to be ready to respond. The car may be able to handle 80% of the trip, but one of those edge cases will sneak up on you and the car. How long would it take you to regain situational awareness and safely maneuver the car in the event of some unexpected situation after you've been cruising in self-driving mode for an hour? Ten seconds? Five? Can you do it in one? What if the car doesn't realize it can't handle the situation at all?

Like any risk, you also need to consider the impact of getting it wrong. If an audio assistant gives you the wrong answer to the population of your hometown, no big deal. But if your car thinks everything is okay and drives you into a stationary fire truck on the shoulder of a freeway when you are travelling at 70mph, the downside of that edge case is infinitely worse.

Sure, humans can make these mistakes, too. But the fact is that your notional world where computers are able to make smarter decisions than humans about how to drive doesn't actually exist. No one has figured out how to make it work. And they won't anytime soon. They've solved all the easy parts. But it turns out there's a lot more involved in driving than all the billions of dollars poured into the problem so far can figure out.


https://www.youtube.com/watch?v=9SexsvIO4vE internet is full of these examples.

My point is, a computer with:

- more data (historical data on how to act in certain situations, plus live data for the event, i.e. sensor data, lidar/radar data, images) vs. a human driver who has neither access to these nor the ability to process them

- faster and parallel processing vs. a human driver

- a single focus/goal (driving from x to y safely and making appropriate decisions to achieve it) vs. a human driver (with "physical limitations", "emotions", "hormones" and other things that make up "life") who is more likely to be distracted...

A computer with all of the above advantages may be able to make a better-informed decision much faster than a human driver can (and when it doesn't, it's hard to know/prove that a human driver would consistently make a better decision every time in the same situation).

Having said that, I agree the tech is in its infancy and it's going to take a decade or two to mature, and even after that, just-in-time human intervention would be needed in some cases. But for the most controlled/learned environments (which cover 70-80% of total day-to-day driving), these systems would be immensely helpful.


If someone did this to a human driver, they may make a mistake and potentially get injured, but pretty quickly someone would notice and do something about it. With automation, a cleverly crafted issue could persist for some time causing quite a bit of damage before it's corrected.

https://www.reddit.com/r/gifs/comments/6ofa63/dont_turn_your...


You might be fooled by a man in an orange vest, but I doubt you'll listen to him if he's telling you to drive into oncoming traffic.


If you don't immediately see the oncoming traffic in question, yes you will. Otherwise your self-preservation rules will prevail and you won't budge.

Note that self driving vehicles aren't different from humans in that respect, except they see much farther.


With self driving cars you have no idea what they would do when presented with an edge case they may have only seen very rarely before. With humans, you can in most cases assume the driver would do something reasonable, especially if there is enough time to think through the situation.


You must be driving in much better places than I do. Human drivers do incredibly unreasonable things even without edge cases.


But they understand less and have less common sense by which to operate.

So, it takes a lot more work on the programming side to compensate.


> Why would it be easier for trouble-makers to fool autonomous cars? As a human driver, I'd be fooled by pretty much any road marking or guy in an orange vest.

Imagine someone hacking the 'construction zone protocol' and spoofing thousands of cars into thinking they're in a construction zone at once. You'd be hard pressed to fool thousands of geographically separated human drivers at the same time.


> pretty much any road marking or guy in an orange vest

That only works if a police car doesn't come by and catch the perpetrator in the act.

With a wireless communication to automated drivers, someone could plausibly feed bad information from a hidden or otherwise remote location.

Beyond that, just as automation allows human-intensive processes to scale by removing the humans, fooling automated drivers can scale much more readily than fooling human drivers.


Apparently, people mess with autonomous cars simply because they're autonomous. It's hard to say whether that's just the novelty factor or something likely to persist.


I can easily imagine some bored teenagers (can even imagine a certain version of younger me doing it) blocking an empty autonomous vehicle from leaving a parking space just for kicks. I suppose coaxing other empty ones into a ditch or waterway isn't too much of a stretch either if the cars are owned by some mega-corp and become some kind of cheap public good like shopping trolleys.

As soon as it becomes a robot, a lot of the social pressure to be a good person falls away. Less so if there are people inside, but I can see empty autonomous cars being given a pretty hard time just for kicks.


Jam RF signals, modify IR cues, etc...


Remote operators solve these issues.

Once you have autonomous cars that drive safely but can't manage complex situations like the ones you describe, you delegate those situations to remote pilots who are allowed to operate the car at slow speeds. You need 5G network coverage with mission-critical features (mcMTC) to achieve that: BLER of 10^-6 and E2E latency < 5-10 ms. Construction crews might be required to erect a 5G mini cell tower before they can start working, to make sure that traffic flows smoothly.

A taxi fleet of 10,000 vehicles might need only 100-200 remote operators to manage the fleet. That kind of reduction in workforce provides huge savings.
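An operator count in that range can be sanity-checked with Little's law (concurrent work = arrival rate × service time). The intervention rate and handling time below are purely illustrative assumptions, not data from any operator:

```python
# Back-of-envelope staffing estimate using Little's law (L = lambda * W).
# The per-car intervention rate and handling time are made-up assumptions.

def concurrent_operators(fleet_size: int,
                         interventions_per_car_per_hour: float,
                         seconds_per_intervention: float) -> float:
    """Expected number of remote interventions in progress at any instant."""
    arrival_rate_per_sec = fleet_size * interventions_per_car_per_hour / 3600.0
    return arrival_rate_per_sec * seconds_per_intervention

# 10,000 cars, one assist per car per hour, 30 s to resolve each:
demand = concurrent_operators(10_000, 1.0, 30.0)
print(round(demand))  # ~83 operators busy on average; staff above that for peaks
```

Under those assumptions you'd average around 83 concurrent interventions, so 100-200 operators leaves headroom for peaks; a higher intervention rate scales the figure linearly.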


> You need 5G network coverage with mission-critical features (mcMTC) to achieve that: BLER of 10^-6 and E2E latency < 5-10 ms.

I do wonder if that's a factor behind Musk's push into low-orbit satellite Internet.

> A taxi fleet of 10,000 vehicles might need only 100-200 remote operators to manage the fleet. That kind of reduction in workforce provides huge savings.

Even if all mileage was human-driven there would be very large benefits if you could really consolidate taxi drivers in call-centres for remote driving. No need to transport or preposition drivers and much less trouble estimating demand.


Starlink can provide 25 to 35 ms latency from low orbits, so I don't think so.


Even 70 ms isn't all that long compared to the official (let alone real-world) thinking-time estimates for braking http://www.brake.org.uk/component/tags/tag/thinking-time , so it might not be an unacceptable delay, at least if combined with professional drivers and reduced speed limits and/or failsafe locally-controlled AI braking.
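For scale, some rough arithmetic on how far a car travels during these delays (the speed and the ~1.5 s human thinking time are illustrative round numbers, not figures from the linked page):

```python
# Distance covered before braking even begins, for different reaction delays.

def delay_distance_m(speed_kmh: float, delay_s: float) -> float:
    """Metres travelled at a constant speed during a reaction delay."""
    return speed_kmh / 3.6 * delay_s

speed = 110  # km/h, roughly 70 mph
print(f"70 ms link delay:  {delay_distance_m(speed, 0.07):.1f} m")
print(f"1.5 s human delay: {delay_distance_m(speed, 1.5):.1f} m")
```

At highway speed a 70 ms link adds about 2 m to the stopping distance, versus roughly 45 m consumed by typical human thinking time, which is why the latency alone may not be the showstopper.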


  fleet of 10,000 vehicles might need only 100-200 remote operators
It won't scale in a predictable way. Let's say there's a major event in NYC (natural disaster or unnatural). You may suddenly need 700 operators at the same time just to deal with NYC and environs.


Let’s say there is an event in NYC. Within milliseconds of the event, all the robot cars can be notified; only the cars immediately in the vicinity of the event need human operators to take over. It’s not straightforward, but it can be done.


Thus requiring the car to always have a licensed driver in the driver's seat. Basically, exactly what we have now with the safety drivers.


Even if you assume requiring human takeover to be a relatively uncommon event (whatever that means), as soon as you posit it as something that will be needed from time to time, you've significantly constrained the car's usage models. You now must have a licensed, unimpaired driver in the car at all times. Even if they don't have to be paying attention, this means no empty cars, no unaccompanied children, no "driving" home from the night out, etc.


Sure, but maybe we can start there. I would certainly buy a car that could drive itself a significant percentage of the time.


Oh, I would too assuming it were relatively affordable. I'd be pretty happy with one that even just let me doze off when highway driving in a limited set of weather conditions.

I was just pointing out that, if you can't guarantee you won't need to handoff to a physically present driver, then there are a lot of things you can't do with the car even if needed interventions are just an occasional thing.


Yeah... Used to work for a small start-up that had a product to basically automate a switchboard used for elderly care. It was fun when the need for manual operation suddenly came around. Didn't have the manpower, nor the actual switchboard.


You reduce the number of vehicles available in those rare cases.


If it’s possible for remote drivers to assist a vehicle with no in-car steering apparatus through areas where the algorithm can’t go, then I think it can be done.

Getting to absolute 100% will require either AGI or an incredible infrastructure investment. Now personally I think FSD is worth on the order of $1 trillion per year to the economy, so it’s the next great Moon Shot, and totally worth every bit of infrastructure investment we can throw at it.

But it makes sense to see how much further we can get with in-car algorithmic driving before the infra investments start coming in earnest to fill in the gaps.

Another possibility is there could be ways for a passenger to assist the algorithm without actually using a steering wheel and pedals as input.

I believe the level before truly perfect FSD allows the car to get stuck as long as it does so safely. Approaching and stopping at a single lane construction zone, for instance.

The current Tesla AP does remarkably well on highways with missing lane markings. A stretch I drive every day is ground down in prep for new pavement and just has the occasional white square marking, but it’s enough for AP to lock in on. It also seems to do fine with cones.

It’s worth noting that construction zones aren’t even particularly safe for human drivers (accident rate skyrockets). So technology to make construction zones more passable overall is important, even if it just enables self driving as a side effect.


It's hard to see how we could really count on remote drivers for anything safety critical considering that the current cellular data network is unreliable, lacks guaranteed quality of service, and has many coverage gaps.


Keep in mind that the vehicle already has all the sensors to do collision avoidance and navigation.

The remote control can be to tell the car "drive this path" instead of direct control of the vehicle over a high latency link.
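A minimal sketch of what such a path-level command might look like, assuming the car keeps running its own local collision avoidance at full rate (so link latency only delays the plan, never the safety loop). All names and fields here are hypothetical, not any vendor's actual protocol:

```python
# Hypothetical sketch: the remote operator sends a coarse path, not steering
# inputs. The car validates the plan locally and stays stopped if it's stale.
from dataclasses import dataclass

@dataclass(frozen=True)
class Waypoint:
    lat: float
    lon: float
    max_speed_mps: float  # operator-imposed cap, e.g. walking pace

@dataclass(frozen=True)
class RemotePathCommand:
    command_id: int
    waypoints: list          # coarse path around the obstacle
    expires_after_s: float   # car stops again if the plan goes stale

def is_usable(cmd: RemotePathCommand, age_s: float) -> bool:
    """Car-side check: ignore stale or empty plans and remain stopped."""
    return bool(cmd.waypoints) and age_s < cmd.expires_after_s

cmd = RemotePathCommand(1, [Waypoint(37.77, -122.41, 2.0)], expires_after_s=10.0)
print(is_usable(cmd, age_s=0.3))   # fresh plan: car may proceed slowly
print(is_usable(cmd, age_s=12.0))  # stale plan: car stays put
```

The expiry check is the key design choice: a dropped connection degrades to "car waits", not "car follows an outdated plan".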


https://boingboing.net/2019/07/06/flickering-car-ghosts.html

Essentially a POC of a signal the car can see but which is so brief that a human cannot see anything.


Yeah, I think this is going to be a long one. Rodney Brooks, AI researcher and Roomba cofounder, says he doesn't expect a real robot taxi service before 2035: https://rodneybrooks.com/my-dated-predictions/

> When Waymo can get rid of the safety drivers [...] they're getting close to something useable.

I suspect that they can cheat this one a little. As long as they're in an area with good connectivity and the cars are smart enough to pull over if they don't get human guidance, I expect they'll move the backup drivers to a central location. Call it SAE level 3.5. It wouldn't be good enough to sell cars, but it would be workable for a ride service, and would allow them to undercut Uber, etc, quite handily.


I dunno, it's hard to imagine any safe way to drive a car in traffic via cell. I don't see how you could get all-round vision with acceptable resolution, frame rate, and latency. VR might solve the viewing angle, at least, but would greatly magnify the effect of frame rate/latency drops. You don't want your driver getting VR-sick five minutes in.

And that's leaving aside reliability. A 99.9% solid connection is not nearly good enough.


I think you misunderstand. I agree that a safety driver being expected to be able to pay attention and being ready to take over at any moment isn't viable.

But consider a point where the cars are good enough that, for the remaining risks, they can show the hazard detection is tuned so it doesn't miss any potential hazard situation (though it may occasionally be overly cautious), and that the car can reliably and safely slow down or stop short of potential hazards it doesn't know how to handle.

In that case you might get to a point where it's ok (safety wise, if not in terms of customer satisfaction) if the car stops for 30 seconds until a human safety driver reviews the data and confirms that what the car "sees" is not a dangerous situation.

In that case you might have e.g. 10 cars per safety driver, or more, and most of the time the car might not even stop - if a driver is available to respond immediately it may be sufficient for it to slow down until it gets a response. And you can simply slowly reduce the number of safety drivers as the cars get better. For a fleet service you might well never stop having some people monitoring to respond to unexpected conditions.

Of course, for this to be viable, the car needs to be possible to be made safe without human intervention, but that safety may be achieved by opting to stop or slow down the car in situations where continuing might be perfectly safe (and with the caveat that this may e.g. restrict where it may be possible to let it drive etc.), but where the car can't yet tell by itself.

This of course presupposes specific types of failure scenarios where the car can safely find a way to come to a stop but can't safely determine if it can continue forwards. It's not a given that's achievable with low enough effort (relative to solving the issues that might cause it to fail to spot a hazard) to be worth it.


Exactly. For example, imagine a situation where a tree is down, blocking one direction of travel. Humans would very cautiously share the remaining road space. But a robot taxi would just stop and wait for the tree to move. At that point it summons a human who tells it what to do in broad terms (e.g., "the new lane is here" or "do a u-turn and follow this other route").


The remote assist isn’t to directly drive the vehicle like some sort of video game.

It would be to annotate something the algorithm flagged as impassable in some way such that the car can continue driving itself.

If the car entirely fails to identify a driveable lane, I don’t think you can remote in and actively steer a human-occupied 2-ton vehicle over 4G.


Right. What happens when a construction crew accidentally cuts the backhaul fiber for the local cellular tower? Or if the autonomous vehicle failure occurs in a tunnel with no cell service?


You'll note that Waymo is very much operating in a limited area. So I think the answer is: they don't offer service in an area that lacks good cell coverage.

Of course, it's still possible that the vehicle will somehow be out of communication when it encounters something it doesn't understand. A cell jammer, say. In which case, it'll do what it will do if it, say, detects engine trouble: it'll pull over and wait.


Run the live video feeds from the vehicles through a style transfer AI that has been trained on Mario Kart, then secretly stream the result to a mobile app disguised as a free taxi driving game. All the app needs is a feature where the app turns off adverts if the player keeps completing taxi journeys without crashing the vehicle and you are good to go.


You would probably enjoy the John Barnes story "The Lost Princess Man", which has a much fancier version of this as a core element: https://www.amazon.com/Princess-Barnes-Short-Story-Collectio...

[Also seen in the collection "The New Space Opera 2".]


>Which makes it useless.

I dunno about that. Maybe it's not immediately valuable in terms of being cheaper than picking up an Uber, but there is still value in getting your sorta-working prototype on the road for data gathering. Your Uber driver isn't going to have a vehicle with dozens of sensors to help your ML bootstrap, that's for sure.

The value is in the data, not the immediate convenience.


Mary Meeker's internet report this year is one place where you can see a summary chart suggesting that not only is technology innovation happening exponentially (which a lot of people agree with), but our adaptability is also growing (which I guess it has to, in order for innovation growth to continue like that):

https://www.bondcap.com/report/itr19/#view/156

(However, what's most interesting about that slide is that we can no longer adapt to the rate of change/innovation in technology.)

So I don't think things need 20 years anymore - how quickly did Twitter, Instagram, Facebook and Google become mainstream?


That's not data. No scale, no data.

Actually, the period of greatest technical change in human history was probably 1860-1910. Steel, electricity, autos, airplanes, subways, radio, elevators, skyscrapers, machine guns...


I know it's an out-there theory, but I believe the problem is much more fundamental. It's switching from an analog operator to a digital one.

Previous analog-to-digital transitions have been easier because we could afford to lose information (e.g. audio, from records to CDs). It was just a matter of making the digital sampling fine enough. Information was still lost, but not enough to matter.

But with driving, it matters. We can't digitize the world to the point where a computer can drive better than an alert, trained human.


Note that all of your examples had working prototypes; it just took 20 years to make them affordable/scale them (along with some incremental improvements).

With self-driving cars we have no such prototype and I would not expect one in the next 20 years. Unless we are talking about driving on specially equipped roads/lanes or in some special situations, where we might see them much sooner.


The DARPA challenge mentioned was for providing a fully working prototype, all those years ago. And not on a highway, but through an unmapped obstacle course.


It was a very different problem from driving on public roads.


I firmly believe that autonomous taxis are 30 years away. The big benefit will be for long haul trucking. That will be here in 10 years or less. They will have special rest areas where the autonomous trucks will park. From there they will only drive on interstates for long distances. When there, they will park at similar special rest areas. Local drivers will then take over and do the more difficult driving. It could really reduce costs as these trucks can drive 24/7 and we can even have them immediately turn around with another load from another company. This could all be completely automated. These rest areas are also a great place to put in charging stations to move these all to electric. By removing most of the narrow streets, cross traffic and unexpected pedestrians, it will be a lot safer.


It's taken more than a decade to get to autonomous trains and autonomous mine trucks (Rio Tinto have sunk a lot of money into it), so I wouldn't hold my breath.


> What took a really long time, in post-1900 technology?

Fusion reactors. Still not there.


> The data provide the most detailed view yet of the passenger experience inside vehicles from Waymo, which doesn’t disclose the information even though it uses public roads as its testing grounds.

Why would riding on public roads matter when it's about what happens inside a private vehicle during a high-value R&D experiment? They got permission from Arizona to run the tests, and it will likely benefit Arizona's economy in return for being the first to get the most training data.


That phrase is just thrown in there to stoke the reader and create a story out of nothing. It indicates a lack of journalistic integrity.


> It indicates a lack of journalistic integrity.

Or, the author is suggesting that public resources should be used for public, not private benefit?


How is someone driving to work in their private vehicle public benefit? The roads are for driving on, nothing more, nothing less.


Not true, the roads are for the use and benefit of citizens.


How about tax-paying non-citizen residents, tourists etc?


They pay for their road use through gasoline and sales taxes.


And Waymo got a deal from Arizona that waived all sales tax and provided for unlimited free gas?! That's outrageous!!


Non-citizens and immigrants, even "illegal", benefit citizens.


Making normative suggestions in the middle of a news article is exactly the lack of integrity that I'm talking about. Rather than reporting the facts, the "journalist" is writing an opinion column without having the decency to label it as such. The validity of his opinions is irrelevant.


Self driving car technology has huge public benefit.


If people aren't sharing the vehicle there's the same number of vehicles on the same number of roads, just less need for parking spaces.

Of course, the potential for reduced accidents should help. Unfortunately, part of why AZ was probably chosen is that it's mostly sunny/clear weather during the year.


Reduced accidents. Time gained for people using self-driving cars. Companies no longer needing to pay staff to pilot cars. The time and cost savings alone, for private individuals and companies, are hard to overstate.


Are the people driving really that much more expensive than the computers, tooling and infrastructure for automated cars? How do the maintenance costs compare? There's a lot more to it than just the drivers.


The initial investment is incredibly high. Just look at how much money and time has already been spent trying to implement self-driving capabilities, and we aren't close yet.

But once the technology is developed and available, the cost will probably be pretty low: a couple of sensors and some processing. I feel like maintenance cost will likely be smaller, since a self-driving car can, in theory at least, detect an issue much quicker than a human.

This is all speculation of course, but human drivers are very expensive. Quick googling says about 20k to 45k per year for a truck driver. Even if the initial investment in self-driving technology costs 50k more per unit, it's still incredibly advantageous.
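The back-of-the-envelope math above can be sketched as a payback-period calculation (all figures are the comment's rough guesses, not real fleet economics):

```python
# Rough payback sketch using the comment's guessed figures.
driver_cost_per_year = 45_000   # upper end of the googled truck-driver salary
tech_premium = 50_000           # assumed extra cost per self-driving unit

payback_years = tech_premium / driver_cost_per_year
print(f"Premium pays for itself in about {payback_years:.1f} years")
```

Real trucks often run more than one driver shift per day, which would shorten the payback further.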


Would you nationalize businesses that involved movement of people or goods, or would you privatize the roads and sidewalks? Otherwise I'm having a hard time imagining how goods would be delivered to stores, how raw materials would be delivered to factories, how bands would get to their paid gigs, and how I would get to work in the morning.


Isn't the whole point of public resources that everyone can use them?


Considering the latter contributed to paying for the creation of the public resource, and their use is neither exclusive nor impedes others from using it, I see no issue.


Like when an individual drives to a store to get groceries?


How is this any different from taxis, truckers, etc.?


Are you suggesting that all taxi companies be banned?


>It indicates a lack of journalistic integrity.

It's tiring reading commentators say things like this. Saying a professional journalist lacks "journalistic integrity" is bold. Why don't you step up and make this argument directly to the author, rather than being snide and posting it on a forum he will never read?

Here's his Twitter; have at it:

https://twitter.com/amir


1. I don't comment on Twitter.

2. I have confronted journalists whose practices I disagree with when I meet them in real life, and in general their excuses are not impressive. Most recently I challenged Ivan Semeniuk, science journalist for the Globe and Mail, when he visited Perimeter Institute for "The Future of Science Communication", a panel discussion we were both on. (Semeniuk was endorsing a different common journalistic practice I disagree with, not the same as exhibited by Amir Efrati, but I can't share because it was a private conversation.)

3. Challenging Efrati would be like confronting every panhandler who tells a false sob story. Newspapers are full of this sort of writing, and you could spend your life objecting to it.


because other drivers and pedestrians on public streets have a very strong interest in knowing how safely these cars perform and how they glitch out given that they could cause accidents at any point?


How's that different from any human driver on Saturday night?


Human drivers know what to look out for in other human drivers, we don't know what to look out for in autonomous vehicles. If those vehicles perform particularly badly under some conditions that are not obvious the public would benefit from having transparent insight into how the cars function, so they can be appropriately alert.

A research experiment that exposes the public to additional risk should be performed with the maximum amount of transparency possible. It is frightening that this even needs to be spelled out.


Human drivers have their own low-hanging fruit that can be pointed at for accident reduction. Tired, impaired, distracted driving are horribly commonplace.

It's a low bar for automated cars to be better than human drivers. Sure, they'll have their 'blind spots'. That's no condemnation of the whole industry, because what we have now is not very good at all (fallible humans, all different). And when automated drivers have an issue we discover, it can be fixed in all of them at once. Try that with humans.


It’s a low bar for automated cars to be better than tired, impaired or distracted drivers. It’s a very high bar to be better than all drivers.

Even tired, impaired, or distracted drivers still behave in semi-predictable fashions - they tend to overreact.


That's why these vehicles have both a backup driver and workers monitoring remotely from the office...

No one is saying entirely driverless vehicles are ready yet. Even some of the earlier hawks who claimed next year have backtracked.

Nothing wrong with that; software deadlines usually run 1.5-2x longer than the initial optimistic estimates.


As the article lays out, Waymo has accurate data on where the cars glitch out. Take that data, put it onto a public map, and make it available for residents to browse, so they have accurate, data-driven insight into what to look out for when they encounter autonomous vehicles. There is absolutely no reason not to do this, the amount of effort is trivial, and it would enhance the safety of people living in those areas.

If waymo wants to have the privilege of secrecy they can run experiments somewhere not open to the public. That should be the standard we apply to these companies.


But … we have Saturday Night human drivers, and we have no such rule for them. No complex system of reporting maps of bars and festivals and when to avoid them. Doesn't seem fair, somehow.


You're spreading FUD. Waymo has a lot of vehicles on the road, and most incidents are human drivers running into the back of them while they're stopped. They haven't had any big incidents.


Yes! The public definitely has an interest in how dangerous other cars on public roads are. Imagine if the data showed that they'd had tons of near misses with cyclists for example.


That's not what the quote is talking about...


Interesting analysis using comments/rider scores. A 10% improvement over 6 months is actually pretty good, though. Even if that rate slows as trickier edge cases get handled, it still implies early 2020s for true driverless readiness in select locations. That's pretty much on track with overall predictions for the autonomous market.


That's 10 percentage points, or a 25% relative improvement (from 40% non-5-star to 30%), which seems quite good to me.

Although I wonder if seasonality is important here - obviously Phoenix isn't Minnesota but could driving conditions have been worse in Q1 compared to this summer, from the perspective of a self-driving car?
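For anyone checking the percentage-point vs. relative-improvement distinction, a quick sketch (the 40%/30% figures are the rough numbers quoted above):

```python
before = 0.40  # assumed earlier share of rides rated below 5 stars
after = 0.30   # assumed current share

absolute_pts = before - after     # 0.10 -> 10 percentage points
relative = absolute_pts / before  # 0.25 -> 25% relative improvement

print(f"{absolute_pts:.0%} points absolute, {relative:.0%} relative")
```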


Definitely could be a factor. Other drivers might also be more used to seeing them on the road, which could have a positive impact in some way. I think the overall trend is probably pretty strong at that level of improvement, though.


That's assuming their rate of improvement is constant and there isn't a long tail of difficulty. But there is. The long tail of difficult driving situations is very very long.

An example from my most recent drive: I drove through a common where cows graze. I doubt a driverless car is programmed to slow down near cows.

The only solution I can see is to whitelist roads and start with the simplest (motorways/highways) then gradually expand to more and more roads. I guess that's kind of what they're doing - suburban America is very easy (although you still have pedestrians and cyclists to deal with unlike motorways), and maybe whitelisting is the cause of the routing complaint.


I think standard car manufacturers and insurers would be in awe if their cars' safety improved at that rate.


Well, there's a ceiling of 100% safety, so it's diminishing returns after a certain point.


If you get nagged to answer questions every time you give a 4/5 you eventually stop giving 4/5 and 5/5 everything because you just want to get on with your day.

The whole "if it's not amaaaaazing it's terrible" thing is idiotic.


Despite the complaints, this is fucking amazing. I remember just earlier this decade I was pissed off at a taxi company for saying a taxi will come in 20 minutes...maybe, dispatcher can’t promise. Now we got Uber/lyft, and then...DRIVERLESS vehicles?? Always seemed like science fiction but the future is here...

The bitch of this business is the long tail of possible scenarios - before people have confidence you need to solve the long tail which is hard because not as much data / much less predictable. Sounds like from article though they are making headway!


10500 rides, 70% are rated 5 stars (perfect). 10% increase from last year.


5 stars isn't perfect. 5 stars is "there's nothing so bad that I want to take the time to complain about it."

Waymo employees, who are encouraged to be especially tough, give reviews that are 47% negative. That's likely closer to the metric for perfect.

We don't get perfection from human drivers either, of course. Though part of the promise of Waymo is much better performance than humans, which it apparently isn't close to yet. And this data is for relatively common cases: for the long tail, one-in-a-million case, the conventional wisdom is that humans would do better.


If I were "encouraged to be especially tough", I can promise you that the taxis I ride in would get a much worse score than 47% negative.


For entirely different reasons, though. Your taxi driver is not going to randomly freak out because they mistook a bush for a pedestrian or didn't understand what traffic cones mean. The bad taxi driver is just going to drive too fast while talking on their cellphone.


> or didn't understand what traffic cones mean

You'd be surprised. I've had a taxi driver turn up train tracks before.


What a strange thing to have leaked. Not details how the tech works, but the database of customer ride reviews?


I imagine ride reviews are considerably less confidential than training data.


Yes, and a few hundred terabytes of data are also much less convenient to upload and share.


I wonder if they spread out to more areas - where there's more variety of driving conditions - if they could start rating difficulty well enough to estimate when they would be able to operate in each one.

So maybe they could start working in 2 years in sprawled suburbs in hot areas where you don't have many cyclists or pedestrians? Or is that Phoenix already and it's still too hard?


I wonder how many people are upset that the vehicles had to drop them off in a valid passenger load/unload zone as opposed to the usual Lyft / Uber tactic of parking in a no parking / no stopping zone, bike lane, crosswalk, etc because it's most convenient for the drivers and passengers (at the safety and expense of everyone else sharing the space)...

That quote at the end:

> I guess Lyft has me spoiled. I like getting dropped off in front of the place im going too [sic] not just in the parking lot....


Cyclist here. In my experience, the majority of the time a driver stops or parks in the bike lane in an urban area in the US (e.g., I live in Austin), there's a legal parking/stopping spot within a reasonable walking distance, often within 50 to 100 feet. (If this isn't true where you live, consider the difference in the location. There are probably exceptions too. I'm told that legal parking isn't typically close in SF.)

Then again, my idea of "reasonable walking distance" seems longer than most people's. Having spoken to many drivers who have parked in the bike lane, I'm amazed by how negatively some have reacted to me recommending that they park as little as 50 feet away. In some cases the non-bike-lane spot is closer but the convenience of pulling to the side of the road rather than doing a more complicated maneuver seems irresistible.

If Waymo follows the law, good for them. Makes me more likely to be a customer of theirs in the future.


In the right light, this is a competitive advantage for Waymo. Prove that it's possible to have a ride hailing app that strictly follows municipal stopping/parking rules, and then encourage cities to start strictly enforcing those rules and ticketing offenders. Self-driving cars would presumably be better than humans at following those rules (at least, if we're imagining a world where self-driving cars work safely and consistently).


A somewhat ironic form of regular regulatory capture, in that it should have already been captive...


I often see Waymo vans near San Antonio and El Camino in Mountain View. It's kind of a nightmare drive for all parties. Curbside parking is allowed, there are no demarcated bike lanes, and much of the road is in suboptimal condition. There is often construction going on along sidewalks and buildings, and Uber dropoffs are common. You occasionally see cyclists, though I suspect most stick to a side street.

What I suspect people are complaining about is that Waymo doesn't do curbside dropoffs at locations with a parking lot -- not common biking routes. I bet Waymo doesn't have the data to know whether a curb is painted yellow, blue, or red, and just avoids them, while a Lyft driver would probably put on hazards and drop people off at yellow curbs and bus stops.


Mountain View isn’t even close to a challenging environment. I would like to see Waymo try SF on the same routes as Cruise.


I have raised this point many times before.

Uber/Lyft drivers break the law dozens of times a day. In fact the entire experience is predicated on their ability to pick you up/drop you off in places they shouldn't e.g. out the front of your house.

I guess self driving cars will be closer to Uber Pool in terms of experience.


It seems like there's an overall issue where people feel they can't do anything if the car takes a weird route or drops them off in the wrong place.

Another comment mentioned in the story says the car skipped the drop-off location and inched past a bus stop, and other people mentioned inefficient routing.

I don't know how the system works, so this might be user error of some kind, but plenty of people are not going to want to get in a taxi if they feel like they have no control over where it's going or where they can get out.


This is one of the issues with machines vs. people. For an able-bodied person, like I am most of the time, sure I'm fine with being dropped off half a block away. The driver may ask me and I'll be "fine." If someone is using a walker--not so much.

I have to believe the last block or two problem will be a big issue with self-driving whenever it eventually arrives.


It’s mainly a different lobbying tactic:

- Uber and Lyft are more toe-stepping and will encourage their contractors to paint outside the lines, then deal with the consequences once the administration has caught up with them and is presented with the fait accompli that this is voters' expectation now.

- Google/Waymo has better relationships with local authorities and can obtain the permit to drop people off after they've proven they are playing within the lines, and can wait for, and eventually finance, urban furniture changes.

Both use people’s expectations, but differently.


> The data provide the most detailed view yet of the passenger experience inside vehicles from Waymo, which doesn’t disclose the information even though it uses public roads as its testing grounds.

What's up with this statement? Should I be forced to publicize my phone calls just because I made them while driving on public roads? It's utterly bizarre that the author thinks he is entitled to see Waymo's data.


I'm not sure if I agree with the writer's statement here, and I'm not sure the right outcome is for Waymo to share data, but I don't think your analogy works.

Your individual phone calls are made as part of your general participation as an individual in society. One of the largest companies in the world, which does its best to avoid paying taxes, is using public roads as a fundamental part of the infrastructure for a project to generate data.

Maybe a better analogy is people who grow large amounts of marijuana in national parks? Yes, it's true that the growers are part of the public, and that the public owns the land, but...

I wouldn't have a problem with stuff like this if corporations were taxed at reasonable rates, and didn't participate so wholeheartedly in efforts to corrupt our democracy. Google donates to many truly vile, despicable politicians in order to shirk accountability, hamper regulation, and accomplish just this sort of de-facto subsidy and others like it.


Companies may or may not have agreements in place with local governments, and those agreements may or may not include provisions for compensation, data sharing etc. However, just saying "you used public roads so you have to make your data public" makes no sense.


How about UPS/FedEx? They certainly test new products on public roads and have no requirement to make public the resulting data.


As you can read above, I'm not arguing for the mandatory disclosure of data.


As you can read above, I wasn't implying you were. Merely pointing out another, perhaps better, analogy.


Corporations are made of people who pay income tax and property tax.


>Corporations are made of people that pay income tax and property tax

Exactly, and drawing a distinction between the two is essential. In the US it seems to be an ideal, anyway.


>> corporations were taxed at reasonable rates

Corporate tax is incredibly high in the United States. It is why corporations funnel their money into other areas where it is taxed at a lower rate. No one gets away without being taxed. Payroll taxes, property taxes, use taxes, L&I taxes, etc.


I think the important point here is that Waymo is testing a dangerous technology on public roads. Putting the public at risk for their corporate benefit. If I'm a test obstacle to see if your car stops in time to avoid hitting me, I should probably get something out of the deal.


All car manufacturers test their cars on public roads.


No, because you get the use of the roads as part of paying your taxes. These are special licenses, providing resources for private research for free with the costs being borne by the public.


What costs?


There's a flaw in parent's comment in that the employees are themselves tax-paying citizens, but to answer your question anyway: land use, congestion, pollution, wear and tear, etc.


Passengers, who are property of the government, I guess?


The slow progress is not an indictment of self driving. This is one of the toughest engineering challenges ever mounted by humanity.


Yeah, that's a bit of an overstatement.

We created nukes, landed on the moon, took sludge out of the ground and used it to power the world. We connected this world with wires and glass fibers to build a real-time global communication system that also gathered all of the world's information in a singular and immediately accessible place.

Building a self driving car is hard but really not as tough.

How quickly do you think we'd get self-driving cars if the USA spent 4% of the federal budget on it, like NASA received in the 60s? (That's about $40B a year for a decade.)


Depends what you mean by "self driving car". IMO, building a fully self driving car requires artificial general intelligence, which is so hard I'm not sure humanity will ever achieve it. If we ever manage to create the kind of AI that fully self driving cars require, self driving cars will be one of the most boring and trivial things that is done with that AI.


My guess is that it (full 100% self driving) has around the same technological difficulty as putting a man on Mars. It also depends on what constitutes "100%" of course.

> How quickly do think we'd get self driving cars if the USA spent 4% of the federal budget on it like NASA received in the 60s. (That's about $40B a year for a decade).

At that cost we could adapt all infrastructure to suit self driving cars, instead of developing self driving cars to adapt to human infrastructure. But I think that kind of cost is always going to be beyond what's acceptable.

I think the discussion is mostly pointless because of diminishing returns: if you can have "99.9% full self driving" for a tiny fraction of the cost, who would want to pay to go from 99.9% to 100%?

Initially human remote drivers will take care of the rest. And then there is a very slow commercial race towards using fewer humans that drives the very slow march to 99.9% and 99.99% self driving and so on. Driving the last second of the last edge case route is basically something that requires AGI (as long as we don't adapt infrastructure).


I agree with most of this, but remote drivers are completely unfeasible with current or planned network infrastructure. You could drive with acceptable levels of sensor bandwidth and latency right next to an unobstructed low-utilization 5G tower, and that's about it. That's unlikely to correspond with the locations one would need remote drivers.


Luckily the hardest situations at least occur in cities and not on highways, and cities have good broadband. But yes, it's still "100%, but only here" a.k.a. not 100%

I think remote drivers will probably have to "rescue" cars without piloting them, usually just assessing a situation and overriding something (driving through an obstacle etc). A passenger (if there is one) could do the same. But sometimes actual remote driving would be required of course.


We won’t know how tough it is until it gets done once.

I don’t think we’ve demonstrated anything autonomous beyond the most trivial kinds of autonomy (e.g. the V2) in all of our technology history.


All great achievements, but none of these is in the same ballpark of difficulty as creating a general artificial intelligence (which is pretty much what self driving cars need to achieve true level 5 autonomy).


Level 5 driving doesn't require AGI at all.

It's an extremely narrow set of problems that have to be solved incredibly well. It mostly just comes down to creating an accurate 3d representation of the world from a bunch of sensors. You also have to correctly segment and label each object in that 3d representation. If you did those two things extremely well, the actual driving logic can be hardcoded.

The problem is that each of these systems has problems so they all have to improve and compensate for each other.


This is incorrect. The hardest part of developing a self-driving car is predicting the world around you in the immediate future. Knowing whether or not that object is a person is a lot easier than guessing whether or not that person is going to jump out in front of the car 1 second into the future. You have to know who is going to run stop signs, when cyclists are about to cut you off, when someone is about to back up into a parking spot.

I don't know whether or not AGI needs to be developed to make a useful self-driving car, but as time goes on I'm beginning to believe that's the case.


This is incorrect.

Predicting motion once you have small time slices and very accurate 3d representations is very, very easy. You can easily calculate expected paths. You have to remember that computers see the entire situation at the same time. A bike doesn't cut off a self-driving car the same way it does a human. Humans are slow: our increments of time are large, in the hundreds of milliseconds, and we can only focus on a couple of things at a time. A computer will notice the slight change in velocity and acceleration within single-digit milliseconds. Then it just has to predict the probability of collision. These calculations are simple.

Deciding what to do in these situations can very much be efficiently hardcoded using decision trees. No one right now working on self-driving cars dares to use a neural network or any other unexplainable & unbounded ml algorithm for policy. You have to be able to hard code in new edge cases as they emerge. You have to be able to study specific crashes or incidents and then adjust the decision-making scheme to specifically avoid that situation in the future.

Truly, the hardest problem is taking in data from multiple sensors, segmenting it, and then labeling it. All in real-time. The sensors are faulty and super expensive. There are also so many different objects out there. If you actually look at the ancillary startups in this industry. They're not working on "common-sense" general intelligence algorithms. They're working to make better & cheaper lidar. They're working on computer vision problems. They're working on image segmentation.
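As a toy illustration of the constant-velocity extrapolation described above (a deliberately minimal sketch, not how any production stack predicts motion; all numbers and the safety margin are made up):

```python
import math

def min_separation(p1, v1, p2, v2):
    """Minimum future distance between two objects moving at
    constant 2D velocity, evaluated at the time of closest approach."""
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]   # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]   # relative velocity
    speed_sq = vx * vx + vy * vy
    # Time of closest approach, clamped to "now or later".
    t = 0.0 if speed_sq == 0 else max(0.0, -(rx * vx + ry * vy) / speed_sq)
    return math.hypot(rx + vx * t, ry + vy * t)

# Car heading east at 15 m/s; cyclist 20 m ahead and 3 m to the side,
# drifting toward the car's lane.
gap = min_separation((0, 0), (15, 0), (20, 3), (5, -1))
if gap < 1.5:  # assumed safety margin in metres
    print(f"Predicted closest approach {gap:.2f} m: brake or steer away")
```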


You're focusing on the wrong part of the problem. You're thinking of everything as a giant physics simulation, and completely ignoring the hardest part: humans.

Let's say you're driving through an intersection with a green light, and there's a pedestrian waiting to cross. The robot has the right of way and goes, but suddenly the pedestrian decides to cross in front of the vehicle. Even if the reaction time was 0.00 seconds it's too late to avoid a collision. The problem is the robot didn't anticipate that the pedestrian was going to cross despite not having the right of way. Humans are better at reading social cues than robots. Maybe robots can learn that, but it's a significantly harder problem than path planning and image segmentation. This goes beyond pedestrians; it applies to other drivers too, and to predicting their behavior on the road. And if you try to drive cautiously to avoid this potential scenario, you effectively stop and crawl every time you see a pedestrian and are not very useful for moving from point A to point B (not to mention all the pissed off traffic behind you).

The reason it's difficult is because it's an uncontrolled environment, and the robot has to be able to anticipate what other drivers/cyclists/pedestrians will do. Robots have done wonders in controlled environments, but trying to bring them to the real world has always been a struggle.


I doubt that most human drivers are good enough to avoid a collision in that situation. You will always be able to come up with a scenario that will fool a computer; you can also always come up with a scenario that will fool a human driver.

The standard isn't "perfect under all conditions", it's "better than a human". Humans are, honestly, pretty bad at driving. The bar is not that high, perhaps unfortunately.


> Let's say you're driving through an intersection with a green light, and there's a pedestrian waiting to cross. The robot has the right of way and goes, but suddenly the pedestrian decides to cross in front of the vehicle. Even if the reaction time was 0.00 seconds it's too late to avoid a collision.

Why does a robot driver need to anticipate this? Does a human driver need to?


Er, yes? Remind me not to walk close to your car :P


As both a pedestrian and a driver, I certainly have to read social cues.

If I'm walking up to a pedestrian crossing and a car is approaching, I don't just step out into the road, even though I have the right of way. I try to make eye contact with the driver to see if they recognize I'm crossing. They'll often nod or do something similar to signal that they're letting me cross.

A machine has to understand these social cues as well. It might even be helpful if the machine has a way to signal its intentions back to pedestrians.


You can probabilistically predict those events with machines much better than humans can. You don't really know someone will decide to run a stop sign, but you do know when the vehicle is past the point it was supposed to start slowing down. That's relatively "easy", we have been predicting physical object motions with analog computers, even. As the parent says, accurate data from sensors is a much bigger problem. But once you have the data, you can model these objects with dumb algorithms.

Computers can also have a much faster reaction time, so a human may need to predict one second ahead, but computers may be able to get away with less.
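The "past the point it was supposed to start slowing down" heuristic can be sketched with the standard v²/2d braking formula (the 3 m/s² comfort threshold is an assumption for illustration, not a real calibrated value):

```python
def likely_to_run_stop(speed_mps, dist_to_line_m, comfort_decel=3.0):
    """Flag a vehicle whose required deceleration to stop at the
    line (v^2 / 2d) exceeds what drivers normally apply."""
    if dist_to_line_m <= 0:
        return True  # already past the line
    required = speed_mps ** 2 / (2 * dist_to_line_m)
    return required > comfort_decel

# ~30 mph with only 10 m left: would need ~9 m/s^2, so flag it.
print(likely_to_run_stop(13.4, 10.0))   # True
# Same speed with 50 m left: ~1.8 m/s^2 is an ordinary stop.
print(likely_to_run_stop(13.4, 50.0))   # False
```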


> You can probabilistically predict those events with machines much better than humans can.

This is an assumption and has not been shown to be correct or even probable


I question the claim that you need to predict the immediate future. Human reaction times are pretty slow, to the point where when our feet hit a pedal in response to light that hit our eyes two seconds ago, something that is prediction for us could be reaction for a machine. A human has to live two seconds in the future because our appendages and lower faculties are lagging behind in the past.


> I question the claim that you need to predict the immediate future.

Alternatively you could develop braking technology which gives vehicles a stopping distance of 0m, but this might be a bigger technological advance than full self-driving AI, and I'm not sure it would be that comfortable for the passengers....


For me, the definite proof that we won't have self-driving anytime soon is the massive fail that was encountered in the recent chatbot fad.

One key part of driving is communicating - with pedestrians, cyclists, other drivers. This happens through body language and other fairly subtle cues.

When you can't make AI work for responding to questions given in text form on an extremely limited problem domain, how on earth would it work for something that's orders of magnitude less well defined and more broad?


"If you did those two things extremely well, the actual driving logic can be hardcoded."

I mean, it _could_ be hardcoded, but there's millions of edge cases so it's pretty infeasible. I agree with parent comment that full level 5 requires something close to AGI - the difficult part in in getting a self driving car is giving the AI something along the lines of "common sense", the ability to reason about what to do in an unfamiliar situation.


Let's say you do sense and label the entire world (which in itself is an impossible problem). Do you think every single action that a driver takes on the road can be hardcoded?

What happens when a street is temporarily closed but doesn't have the correct signage? What if there's a police officer or road worker signalling instructions by waving his hands? What if the lights at an intersection stop working? What happens if there's a car burning on the side of the highway and drivers need to change lanes to go around it?

And these are just some of the problems in a large American city. Think about rural areas, places with more aggressive traffic, places with wildly different written and unwritten traffic rules.


> Level 5 driving doesn't require AGI at all.

> It's an extremely narrow set of problems that have to be solved incredibly well.

It does require AGI, at least if you're planning to drive on most of the world's roads, and not only on some "pampered" streets in the middle of the desert or on heavily-regulated and very well maintained streets like in Norway or Switzerland.

As a human, I have quite an "accurate 3d representation of the world", but even so, many times I'm left dumbfounded by what the people driving on the same streets as me are doing. And even if you do manage to replace all those other people with self-driven cars, how do you account for cows ending up in the middle of an interstate (it happened to me at least once), for wrong street markings or no street markings at all, or for drunken bicyclists whom you can't see at night?


It's way tougher.


Waymo has a far better safety record than NASA, and it's doing self-driving cars. NASA is allowed to kill people. Waymo isn't. The Internet is allowed to be full of malware. Waymo isn't.


I think it parallels nuclear fusion in a lot of ways. The impact is unquestionable, but the feasibility is an ongoing debate.


My peeve is not the slow progress: My peeve with self-driving companies, Waymo and Tesla alike, is their constant misrepresentation of their capabilities and their timelines for the benefit of stock value and public opinion. The technology doesn't really work, and Google's marketing for the past five years has claimed regulations were the only thing holding them back.


Is the market really being fooled? Tesla's stock price falls pretty often, and in a universe where the hype was real it would be a lot higher.


In all fairness, it seems as if a lot of the companies and individuals on Team Right-around-the-Corner have really backed off in the past year or so. I'm not sure how much of this is the deflating of unrealistic expectations and how much is just a tacit agreement to stop trying to top each other given how many hurdles remain.


Citation? This feels like you swapped Google for Tesla. (Elon literally said that; I can't find anything that says Google has said it.)


I mean, off-hand, this paywalled article from 2016: https://www.barrons.com/articles/googles-self-driving-cars-f... The search result for it had the text "Google is so confident about its technology that the Internet search giant has already agreed to accept liability if its self-driving cars cause an accident."

Here's someone following right along with the suggestion that regulations, not lack of technology is to blame: https://www.wired.com/story/outdated-auto-safety-regulations... The author, part of the Competitive Enterprise Institute, works for Google: https://services.google.com/fh/files/misc/trade_association_...

That was page one of my search results, but suffice to say Google's been insinuating this for a while, both from the Chris Urmson era and the John Krafcik one.

I specifically referred to Tesla in my post as well. I saw the suggestion that Elon's claims about release dates for Autopilot features were effectively timed to manipulate the market. I'd give credence to that theory, or to the theory that Elon just has no clue how far he actually is from success. One of the two.

Both companies horribly misrepresent the fact that self-driving isn't around the corner.


What's HN's stance on publishers posting links directly to HN?

To be clear, this isn't meant as a blast against The Information or ballmers_peak; they're transparent about their affiliation and that this is an article whitelisted for readers referred by Hacker News. It's also a totally appropriate article for HN.

That said, I kind of wish there was some in-line indicator of a motivated submission source. I guess a downside would be unethical publishers just laundering their submissions through ostensibly unaffiliated accounts, but I'd feel more comfortable knowing that we did have some kind of system here to encourage transparency.


As long as HN has effective vote ring detection, I don’t see any issue with directly posting links.

The crowd helps us here and paternalistic rules can only prevent it from functioning correctly. If the post is good, it will move up. If not, it won’t.


HN doesn't have any rules against self-promotion as long as it's not spammy/low quality content.

Additionally, this is an interesting case as the custom link is an official way to bypass the paywall, which is the more frequent HN complaint: https://news.ycombinator.com/item?id=20414141

Comments are a different story, e.g. if a startup employee that works for the startup comments on a post about the startup, they should disclose it.


The Information is some of the highest quality and well-researched tech journalism out there. They frequently break stories months in advance of other publications, and are one of the few tech outlets that actually warrant having a paywall, and are not just rehashes of press releases. I personally have no issue with publication staff posting content.


A week ago I had a Lyft driver who drove like a maniac. Will he get a saucy exposé too?


If you're planning on cloning him millions of times and putting him behind the wheel of every car, then yes, he does deserve an exposé!


Why do autonomous cars get so much attention? Why are they always in the news, and why do they always trend? For the common man, why are we excited when we should be skeptical and more careful, anything other than excited?

But for some reason these driverless cars do get most attention.


When are they going to just seal off an area so it's self-driving vehicles only, and test them there? In a 100% self driving environment, edge cases are probably much easier to reason about. And if the future is 100% self driving, they should probably get some training data from that environment anyway.


Waymo has quite a large autonomous-only test facility. https://www.theatlantic.com/technology/archive/2017/08/insid... They also use it to populate simulations with realistic traffic models.


With bicycles and pedestrians sharing the common roads, a 100% self driving environment anywhere but the freeway is never going to happen though. They need to be able to handle the chaos.


>> When are they going to just seal off an area so it's self-driving vehicles only, and test them there?

Talk about your all-time overfitting problem.


They already ran plenty of tests in a controlled environment:

https://www.wired.com/story/google-waymo-self-driving-car-ca...


This might just be me, but it strikes me that the biggest problem self-driving cars have is that no-one actually needs them. Don't get me wrong, if you try to sell me a car with that (fully capable) feature, I'd definitely want it. But need it? Well, you're not going to solve the problem of too many cars and the slow traffic that results with cars. It'd be hard to get special treatment for cars with a premium feature. Then there's the taxi model, where the aim of the game is to undercut people that Uber has already managed to push below minimum wage. I mean, there's a margin there, but is it really "Next Google" sized? Even with dedicated vehicles, that's a lot of hardware to maintain.

Now let’s assume it’s successful and sustainable. Let’s say we also manage to get it to work for freight. What have we achieved? Well, we’ve put a couple more of the jobs available to low-skilled workers on the scrap heap. There will be consequences to doing that, but I doubt Waymo will be footing that bill.


There are 3.5 million truck drivers in the US [0]. They are paid at least minimum wage. Alphabet has $148.299B revenue per year [1]. A conservative estimate of the revenue Waymo could earn just by replacing truck drivers is 7.25 (the US federal minimum wage) * 40 (working hours per week) * 50 (working weeks per year) * 3.5 million ≈ $50 billion a year.

I think once you expand past the US, and past trucks, this is definitely "Next Google sized".

[0] https://www.trucking.org/News_and_Information_Reports_Indust...

[1] https://www.macrotrends.net/stocks/charts/GOOG/alphabet/reve...
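The parent's estimate can be sanity-checked with a few lines of Python. All inputs are the comment's own assumptions (minimum wage, full-time hours, every driver replaced), not verified figures:

```python
# Back-of-the-envelope: annual wage bill if all 3.5M US truck drivers
# were paid exactly the federal minimum wage, full-time.
min_wage = 7.25          # USD per hour (federal minimum, assumed)
hours_per_week = 40
weeks_per_year = 50
drivers = 3_500_000

annual_savings = min_wage * hours_per_week * weeks_per_year * drivers
print(f"${annual_savings / 1e9:.2f} billion per year")  # prints "$50.75 billion per year"
```

So the "$50 billion a year" figure checks out as a lower bound under those assumptions; actual trucking wages are well above minimum, so the real number would be higher.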


That's a Pyrrhic victory though. You'd essentially convert trucking into retail, turning a semi-skilled workforce, as well as a small entrepreneurial one, into a just-in-time near-contract workforce with high barriers to entry for starting businesses.


Disabled people need them, very old people need them. Drunk people need them.

Lots of people could do with a driverless car.


So long as I need to watch it, I don't feel a huge need for my personal car to be self driving. It doesn't free up any time for me, so I'm not ready to pay for it.

But if we had "full" self driving, as in the car can drive empty, that opens up new possibilities.

First of all, I could avoid owning a car altogether. I could be part of a car sharing/pool system where I can summon a car when I want it instead. Those pools exist, but their biggest drawback is you have to get TO the car instead of having it in your driveway.

Second, I could get driven home after having a drink. Where I live the legal limit is 0.02 so one drink equals a Taxi.

Those are features I'd very much be willing to pay for.


I personally think there are very few things we need today more than self driving cars. Driving is one of the most dangerous things we willingly do in our daily lives. There is an insane amount of valuable space in cities devoted to roads and parking, which is all used inefficiently. Valuable time is wasted in commutes and traffic.

Now imagine if every driver on the road followed all rules. Traffic flowed smoothly and there were fewer accidents. Your car dropped you in the middle of the city and drove out of sight till it was needed again. No one ever drove drunk or impaired. You could catch up on work or just watch something during your daily commute.


Now imagine a strong wind brought down a tree branch on the road, or a strong rain created a big puddle, etc. There is a lot more to driving than just following the rules. And I do not see how self-driving cars reduce the need for roads. Or parking. And the time spent waiting for them to arrive can be significant.



