What this shows is that Tesla isn't even on the path to self-driving technology. It sounds like "Autopilot" has no world model (or an extremely basic one): it ignores stationary objects because it can't tell whether it's approaching something on the side of a road that curves away or an obstacle in its lane. It can't tell because it doesn't know what the car is going to do or what the lane is going to do. That's not the first step toward self-driving technology. It's a fundamentally different and simpler technology that isn't going to evolve into real autonomous driving.
It might be even worse than that. They likely think they are on the path, because they are running a massive data-collection campaign on the sensor suite installed across the fleet. But the reality might be different. Running a data-collection campaign is easy; engineering products with machine learning is hard. Most teams don't even know what they need to know to do it right, and so they engage in the classic cargo-cult practice of carving wooden headphones. Regretfully, this is unlikely to change soon. It takes a decade to hone the skills of building products with any technology, and the majority of people in the field have yet to join it.
I mean, I'm also pessimistic about self-driving cars, but I think it's reasonable to assume that Tesla engineers are aware that machine learning is hard.
Yandex Taxi has Youtube videos of their self-driving car. This video of a car navigating a snowy Moscow street is especially impressive [0]. They don't seem to come up in self-driving car discussions on HN.
A system that uses LiDAR can be conceptually a lot simpler than one without it. LiDAR tells you where all the obstacles are (stationary or not), so it naturally circumvents Autopilot's main problem.
I'm actually moderately optimistic about self-driving cars. I think we will see large fleets operating with no drivers in good conditions (and at low speeds :) in a relatively short time frame. There are companies where people have been honing the skills of building products with machine learning for decades.
BTW, a bit of a plug: I'm running a nonprofit/public dataset project aimed at increasing the safety of autonomous vehicles. If anyone here wants to contribute (with suggestions / pull requests / following it on Twitter / etc.), you'd be most welcome. It's at https://www.safe-av.org/ . I'm just starting it / trying it out, so simple feedback or following the Twitter handle would already be helpful.
That's strange. The system installed on Volvos, one of the most popular "crash management" systems on the market, is quite good at this, to the extent that they put its test video into their commercials.
"When the adaptive cruise control is following another vehicle at speeds in excess ofca 30 km/h (20 mph) and the target is changed from a moving vehicle to a stationary vehicle, the adaptive cruise control will ignore the stationary vehicle and instead select the stored speed.
"The driver must then intervene him/herself and brake."
Wow, that is some really unclear and weaselly wording. Do I understand correctly that it means "This system might decide to 'follow' a stopped car and crash into it, and then we'll say it's your fault"?
As an open-wheel formula car racer, I have to say that my best guess is that most people who would purchase a "self-driving" car do not like to drive, and I believe this is a major factor in these crashes. (If I could disable the anti-lock brakes on my cars I would, and I ALWAYS drive with the skid control turned off. That said, I have track-tested some of the VW models circa 2010 on short autocross tracks with the stability program on and did not find it a hindrance to driving the cars sideways through the corners. I did not time them to see which way was quicker, but I felt in control, not that the car was fighting me. Even so, I still drove my GTI with the stability program switched off, even in the snow and especially in the rain.)
One of the most difficult things to do in a race is to go fast and drive at your best the moment the flag goes green. Here, it appears to me that we have people who don't like to drive, and who therefore likely aren't very good at it, suddenly faced with a life-and-death crisis in which they NOW have to save their lives by starting to drive the car when they are not even warmed up and have very little room for error... I mean, do the math.
Liking something doesn’t mean you have to partake 100% every time the opportunity arises. I like chocolate cake but I don’t order it every time I sit down to eat. What’s more, a lot of us are very skilled at things we don’t particularly like.
Ultimately what you’re keying into is skill and interest - most people aren’t very skilled at driving and they don’t really care to be.
Um... Just because someone doesn't care to do something doesn't mean they are bad at it. In fact, it's been my experience that the people who enjoy driving the most are the most reckless ones.
Not sure why you think so. If this is correct, there should be many other equivalent or better campaigns currently running at the same scale, right?
On the contrary, I imagine that data is a very significant limiting factor in this field.
Both data and computation are hard problems. AFAIK Tesla should still be leading in the former category by virtue of the number of its data collection devices. Crucially, they are collecting training data (real world human input) with which they will be able to train and test models.
Note, I am not saying that Tesla's current models are good, or that they will easily improve them. Only that you are too dismissive of the potential advantage that Tesla's data affords them.
Do you know whether the data-collection campaign they are running is not a cargo cult? I don't. I hope it is a good and well-designed campaign, because it consumes useful societal resources.
Edit: yes, personally I'm very impressed by the size of the campaign. It is just that, again, there is a spectrum of difficulty between running a campaign even of this size and running it right, in a way that is actually useful for building a product. And in my opinion, building an autonomous vehicle is much harder still than running a campaign of that size the right way.
Why don't they partner with Waymo? They could build the cars and collect the data, while Waymo provides the ML expertise. I'm sure Tesla+Waymo would be a hit among a certain type of crowd.
There's active disdain (or at least snide jokes) among the senior Waymo engineers for the approach to autonomous driving taken by Tesla. I can't see a partnership in the cards, probably ever.
How would the engineers at Waymo know Tesla's trade-secret approach beyond the physical hardware? Tesla is the only major player in the race to autonomous cars that has no patents. Waymo has 145. [1]
The island natives in the pacific got a lot of fantastic things from the servicemen who visited their islands during WWII. In an effort to bring them back, they emulated everything they saw the servicemen doing. In effect, they were just going through the motions without purpose.
They built runways and carved wooden headphones like they saw the aviators wearing.
Here's a summary from Wikipedia:
With the end of the war, the military abandoned the airbases and stopped dropping cargo. In response, charismatic individuals developed cults among remote Melanesian populations that promised to bestow on their followers deliveries of food, arms, Jeeps, etc. The cult leaders explained that the cargo would be gifts from their own ancestors, or other sources, as had occurred with the outsider armies. In attempts to get cargo to fall by parachute or land in planes or ships again, islanders imitated the same practices they had seen the soldiers, sailors, and airmen use. Cult behaviors usually involved mimicking the day-to-day activities and dress styles of US soldiers, such as performing parade ground drills with wooden or salvaged rifles.[14] The islanders carved headphones from wood and wore them while sitting in fabricated control towers. They waved the landing signals while standing on the runways. They lit signal fires and torches to light up runways and lighthouses.
When I test-drove a Tesla last year, the system gave the impression of reacting to the outside environment instead of predicting it. It was always later in applying a correction than I would have been.
It would drift towards the next lane, and while not at the exact last moment, it would steer back slightly later than I would have. Not confidence-inspiring. If it had been an option, I would not have paid for it.
Do we know that "having a world model" is how self-driving cars ought to go?
It sounds like most self-driving systems have several subsystems that relate to each other (each of which, at least implicitly, has a world model). It's hard to know whether they should instead have a single world model from which all information is extracted, or whether they should improve the integration of their existing systems. I think humans tend to have several world models at different levels of exactness when dealing with the world (which doesn't prove a car should have that; complete consistency seems to have advantages here, and "seems" is the word).
Perhaps one could say - "despite various kinds of progress, it's not obvious what direction to take for full self-driving cars".
Let's be honest, actual mass-deployed self-driving tech is probably 20 years away at minimum. I personally might be able to enjoy it just before I retire.
At the 2016 SXSW festival the Waymo project director said that deployment would happen incrementally depending on the local conditions.
>Not only might it take much longer to arrive than the company has ever indicated—as long as 30 years, said Urmson—but the early commercial versions might well be limited to certain geographies and weather conditions. Self-driving cars are much easier to engineer for sunny weather and wide-open roads, and Urmson suggested the cars might be sold for those markets first.
Urmson put it this way in his speech. "How quickly can we get this into people's hands? If you read the papers, you see maybe it's three years, maybe it's thirty years. And I am here to tell you that honestly, it's a bit of both."
He went on to say, "this technology is almost certainly going to come out incrementally. We imagine we are going to find places where the weather is good, where the roads are easy to drive — the technology might come there first. And then once we have confidence with that, we will move to more challenging locations."
Reading this comment (and the many like it) I get the feeling that something essential must be getting left out when we're speaking of the "far-off mass-deployed self-driving tech". Mostly because it directly contradicts my day-to-day experience.
I've personally seen a few "self-driving cars" on the streets of SF and I suspect many of you reading this have as well (particularly if you live in the Mission or South Park).
Just last year I was talking with engineers at the self-driving car company "Cruise", which had a free internal app to taxi employees around in a self-driving car. I personally saw the tech demoed. One of the engineers had called one up to go drinking the weekend prior (sans drinking buddies - company policy). He claimed his coworkers come to work in them occasionally. The cars and engineers could be a grand charade, but it seems like "self-driving cars" are already used by non-daredevils every weekday.
I learned to drive on the backroads of rural Colorado; it may be that these cars are safer than I am in the sometimes "adverse" driving conditions of San Francisco.
That can be how a distant technology looks, if the stakes for failure are that people die.
You can get early adopters from one tail of the bell curve, hype, working prototypes, and people dying because prototypes fail. There may be a churn of companies... "We know that AlphaDrive and BetaDrive both killed people but we here at GammaDrive want to reassure you that we're totally committed to your safety!" "But that's what BetaDrive said." "Yeah but we REALLY MEAN it this time." "Wait, what do you mean, 'we'?" "Um, nothing..." "You hired BetaDrive's employees when they went out of business, didn't you?" "Uh... Oh would you look at the time!"
With that said, I don't know Cruise, and maybe they're doing better than Tesla.
Extrapolating from a few sightings in Silicon Valley to mass adoption is an incredible leap.
And looking at Cruise, their website is low-grade marketing, mostly concerned with hiring rather than with pushing their actual technology. It's amateur hour, honestly.
That won't change unless companies like Tesla stop worrying about being the next "Great American Industry" and start focusing on real-world problems and their solutions.
If Tesla could develop a network of self-driving buses, that would have far greater reach and impact than trying to build a private charging network and an every-case-considered self-driving system for asshole-mobiles. Not that Tesla cars aren't amazing, but they seem like a giant waste of resources for self-centered people. Not to mention that in situations like self-driving, buses have completely predictable paths, and because of that it's much easier to optimize for corner cases while working toward a general solution.
With the exception of the handicapped, people don't really need cars. Bikes/e-bikes work great for any commute under 30 miles and for hauling cargo under 100 lbs with a trailer. Electric skateboards are also great if you have little to carry and fewer than 12 miles to go. Beyond that, self-driving delivery fleets in the vein of UPS, plus buses, make sense. There's really no valid reason to use a car unless you're an invalid.
But guys like Elon Musk are happy to burn the earth to the ground to terraform Mars instead of working on how to terraform places like Arizona or Pakistan because it looks better to them on paper. Or so it would seem.
>With the exception of the handicapped, people don't really need cars.
What country do you live in? We don't all live in downtown SF. In fact, most of us don't. We live outside the city and drive 20 miles to get to work (you're nuts if you think most people are going to spend hours commuting 60 miles on a bicycle every day). And there are no bike lanes, let alone paths.
And it snows.
>There's really no valid reason to use a car unless you're an invalid.
You're on another planet. It's interesting that you talk about finding practical solutions to "real problems" yet seem to lack any notion of what the real problems are.
Anytime I read posts like that, I just assume the author has never been to a place that requires a car to get anywhere in a decent amount of time, or that it's been so long that they've forgotten.
Yeah, I could get an e-bike; I'd just have to make 5x+ the trips to the store, since I doubt it could hold what I can get in one trip now.
How would I drive 20 miles one way to work in the winter when there is zero public transportation here? There's no bike lanes out here, and taking an e-bike on the road will eventually get you killed, that's if you can stand driving 40 miles round trip in the snow / rain 5 days of the week.
I guess I could move (as has been suggested before, lol), but that brings a lot of new problems that an e-bike won't touch.
It just isn't feasible for most people yet. Hopefully sometime soon, but definitely not right now.
Bikes/e-bikes work great for any type of commute...
Except when they don't. Winter means snow, ice, and below-zero temperatures, and my hands simply can't handle the extra cold from the wind. I'm not handicapped and do walk during the winter, but I tend to take the bus on the very rainy and very cold days despite dressing for the weather. That's not to mention that I'm nowhere near skilled enough with a bicycle to feel safe riding on two inches of ice, regardless of studded tires. Summers have really strong sun, making folks deceptively warm and sweaty. I don't know about your job, but most of mine want me to arrive clean and not smelling of sweat. There's always the chance of cargo, too: I walk to the grocery store most times. (I am not legal to drive in my present country.)
When I lived in Indiana, there were not only problems with the actual weather, but also with routes. The easiest routes for commutes were major roads, but unfortunately most of those roads aren't built for people to ride bikes or walk on. In some places, doing either on those roads meant you got stopped by the cops and told to find another path; you were putting cars and yourself at risk using that road. Luckily, there are more paths for such things here.
Not only that, but e-bikes have limited range in most circumstances. Higher-end ones go far enough, but some of the cheaper models risk losing power at the higher end. And by the time most e-bikes have power to spare on those commutes, here they might legally be scooters.
I do agree with having a robust public transportation system, however. That actually solves the issues with the bikes, but bikes alone won't work.
So first off, I love buses as a means of transportation, and in any situation where biking would be unsafe, I would err on the side of caution and use a bus like you suggest. So, you know, no judgement here; I think that's cool.
And yeah, not having bike paths makes it a whole different story. Drivers can be very irresponsible regarding the safety of cyclists (especially during inclement weather), and I would personally be very wary if I had to bike exclusively on the street. Where I live, I can get pretty much everywhere without touching a single street thanks to all the dedicated multi-use/bike paths. In fact, the reason I purchased an e-bike and decided to try to go car-free was the installation of a bike path that parallels the major highway here. But all of the bike paths here are a response to how many people here are cyclists and the dedication of the community to take steps to fight climate change and pollution. Places like Indiana may never have a good bike system without some sort of federal intervention if they are really that anti-biking, but there are much better places to live if you like to bike. Riding around the Denver/Boulder metro area is an absolute hoot, and there are places that are even more bike-friendly.
Living in Colorado, I bike-commute in both freezing conditions and 100+ degree weather without breaking a sweat on my e-bike. I'm not some hyper-fit guy; I've got a dad bod and long hair (along with a beard at the moment) and I generally run warm. But after I settled on trying to use the e-bike I got as my primary means of transportation, I started solving some of the problems you discuss.
For instance, during the winter I wear a black down jacket (with a massive backpack that keeps my back extra toasty), an insulated black balaclava with ski goggles, ski gloves if it's extra cold (otherwise just some simple neoprene gloves), long johns, and thick wool socks, and to combat the wind I ride with my hands in my jacket pockets (though that's admittedly only possible because of the long open curves of the bike path, where I never have to make any surprise emergency stops). You might think I'd be sweating like crazy in all of that, but you have to remember: with an e-bike, my pedaling effort is similar to riding a beach cruiser down a flat boardwalk with a tailwind, regardless of wind, hills, or cargo, and my layering is designed to keep me comfortable. At most I will unzip my coat to ventilate my torso. And when I get to my destination, I usually just stop off at some restroom to slip off my long johns; sometimes I'll also switch my shirt.
Also on the note of snow: while I wouldn't recommend it, as it's super sketchy and totally unsafe, I've found that getting around in even a few inches of snow is weirdly doable with an e-bike if you have it in throttle-only mode. The reason is that you can keep your body still and work on maintaining your balance while letting the motor do all the work. It works so well because you don't have to worry about unpredictable tire slippage throwing your weight around. An e-bike tire can slip all it wants and I'm still just going to be standing over my bike in a track stand while maneuvering the front tire. Plus, with mountain bike tires you get better traction on snow or mud, so it works even better than my track bike.
During the summer, I wear white UPF-rated compression tights under highly breathable hybrid shorts and a UPF t-shirt, plus white nylon gloves and a thin moisture-wicking white balaclava with sunglasses. Do I look like a kook? Sure. I get a lot of weird looks. But I don't mind: the white outfit not only protects me from sunburn, it increases the evaporative cooling of my sweat and reflects most of the light, so my skin isn't being heated by direct radiation. The result is that I can get around quite comfortably regardless of the heat outside. And during the summer it's even easier to carry an extra shirt or pair of shorts to switch into a non-ninja-esque outfit. Plus, despite the weirdo factor, it's at least a little fun to go into a bathroom and come out wearing tights like Superman.
Regarding range, that's one of the cool things about e-bikes: the one I bought for $1300, like most bikes I've seen over $1200, has a battery that can be unlocked and easily removed, so I just bring the charger with me (not much larger than a power brick for a gaming laptop), stick the battery in my bag, and plug it in wherever I'm stopping. The battery fully charges in 4 hours, so assuming I don't need to go more than ~25 miles in a 5-hour window, the e-bike works perfectly. (Though I will say there is definitely a big power-output drop-off when the battery gets under 30%; I rarely get mine under 40%.)
The big things bikes have over buses are convenience and price. Much like a car, you can go anywhere you want, and because of the speeds (up to 28 mph for class 3 motors or 20 mph for class 2), my commutes are generally faster than by bus and, on average, only 40-50% slower than taking a car. That might sound like a lot, but it's usually only an extra 10-15 minutes, plus I get the fun of riding a bike around. Not to mention that in areas where finding parking is difficult, riding a bike can actually end up being faster than driving.
Buses are definitely part of the equation, which was the point of my original comment, but that shouldn't negate bikes as a preferable means of transportation, especially if you're in a situation where you can get away with commuting by traditional bike rather than an e-bike. They're the most energy-efficient standard mode of transportation available, to my knowledge.
This is pretty extreme.
For my e-biking to work during summer and winter, I wear pretty much the same as whatever the pedestrians are wearing. The only extra thing I take is a pair of light waterproof pants to supplement my jacket in case it really pours.
I'm a big fan of the added basket on the back of the bike - so there is no need to put anything on my back.
Waymo almost certainly will launch this year a fully self driving service, accessible to anyone, "within parts of the Phoenix metropolitan area, including Chandler, Tempe, Mesa and Gilbert" (https://waymo.com/apply/faq/) which is where their pilot program has been running for several months now.
They already have at least 600 cars in that area and already announced plans to buy 80k cars.
So if by "mass" you mean "millions of cars available world wide" then yeah, maybe not 20 years but it'll take a decade to scale this world-wide.
But to me "self driving works in practice" will be validated within months of Waymo launching in Arizona and then they'll enter a phase of dazzling scale up to other cities.
For comparison, it took Uber 7 years to outgrow taxis, and I expect self-driving to grow a bit faster.
Once the technology works it's just a matter of capital expenditure and physical limits of how fast you can scale car manufacturing.
It'll take many years to deploy this worldwide, but I don't think this is the standard most people use when they determine whether self-driving technology works.
Waymo showed off their tech for driving in snow/rain in Google IO this year.
They've also done a lot of their testing in the SF Bay Area, which is a pretty difficult urban area to drive in.
They're obviously launching in Phoenix because it's the easiest major city to drive in, but they're also obviously going to expand to other places once they get successes in Phoenix.
Honestly, to me, the more interesting question is: As self-driving cars slowly expand to other cities, will other cities intentionally start making their lane markings more obvious and doing other things to make it easier for self-driving cars to drive in?
Sometimes, shuffling around blame can cause things to get fixed. Previously, with bad lane markings, you could blame bad drivers for accidents. But with self-driving cars, you can only blame the software. So if a self-driving car company is willing to serve one city with good lane markings but not another city with bad lane markings, suddenly the city government gets the blame for bad lane markings.
Weather is probably one of the easier problems to tackle. Sure, performance will be worse in a snowstorm, but that's true of humans too. Waymo has already shown they can filter out snowflakes. Maybe it wouldn't work in white-out conditions, but in that case, again, humans also can't really drive.
The more complex things that can happen in urban areas though, yeah that's hard. All kinds of weird shit can happen that takes social understanding to know how to deal with. Right now it looks like for those situations Waymo is relying on remote "coaches" that tell the cars what to do at a high level (guidance, but not directly control).
It’s true that the probability of collision is likely lower on the dry, wide, and well-maintained roads of Phoenix, but given the higher speeds (higher kinetic energy), the probability of injury/death per collision is likely higher.
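Since kinetic energy scales with the square of speed, the gap need not be small. A back-of-the-envelope comparison with illustrative speeds (mass cancels out of the ratio):

    # KE = 1/2 * m * v^2, so the energy ratio between two speeds is (v2/v1)^2.
    v_city, v_arterial = 45.0, 65.0    # mph; illustrative, not measured values
    print((v_arterial / v_city) ** 2)  # ~2.1x the energy to dissipate in a crash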
"Phoenix is the capital and most populous city of the U.S. state of Arizona. With 1,626,078 people (as of 2017), Phoenix is the fifth most populous city nationwide"
Explain the criteria you used to classify it as "suburbs".
Phoenix is the first metro area to launch the service.
Waymo (and others) are testing in other places.
From https://waymo.com/ontheroad/: Kirkland (lots of rain), Bay Area (including San Francisco), Detroit (snow during winter), Atlanta, Austin.
I don't see how you arrived at the 1%. And 1% of what?
If it took Waymo 10 years to drive in Phoenix, Arizona, will it take them 1000 years to improve enough to drive in Detroit?
Those things are pretty binary: either they can drive autonomously or they can't.
I lived and drove in the Phoenix metro area for many years, and there are very, very few areas that are as dense as most other cities. On top of that, the infrastructure is relatively new, the roads and lanes are nice and wide, and weather isn't a major concern almost all the time.
GP's assertion that driving in Phoenix is easier than most other places is absolutely correct.
His assertion was that it's 100x harder to drive in Detroit during winter than in Phoenix. Is that absolutely correct?
I don't dispute that Phoenix is easier to drive in than San Francisco, or than Detroit during winter.
It's perfectly rational to debut such service in the easiest possible environment.
I just don't think that it's a fundamentally different problem to make this work elsewhere.
This technology has already improved by leaps and bounds. At first, those cars couldn't finish a drive in the desert with no other traffic around.
I don't have any inside knowledge of this, but the fact that they are already testing the cars in more difficult areas indicates they are working on the more difficult problems.
It's anyone's guess how far they are on that front.
My guess is that after launching publicly in Phoenix, it'll take less than a year to launch publicly in San Francisco.
Phoenix could easily be 100x easier than a snowy winter environment.
It is one thing to have broad streets, well marked, with multiple levels of marking, e.g., stripes, plus curbs, plus medians/sidewalks, plus trees/shrubbery. All of these well-designed and well-built modern road features cooperate to make a consistently recognizable environment. And the lack of serious weather is a big deal too.
Contrast that to a city like Detroit or Boston, where the streets are anything from modern to ancient (literally paved over the cow paths), constantly changing with construction, lucky if the lane markings are still visible, with pedestrians in all kinds of odd situations (legal and illegal) -- an environment orders of magnitude more difficult to sort out. Now add snow in quantities enough to make it often difficult even for an experienced human driver to figure out where you are in the lanes, and then snowbanks in odd places after it is cleared.
Sure, it's still 4 wheels, power plant, steering wheel, roads, but two quite different games.
Going for a walk in Central Park at lunch and hiking up Denali in Alaska are also ostensibly similar activities, but in actual reality, are very different games.
>I just don't think that it's a fundamentally different problem to make this work elsewhere...
>...I don't have an inside knowledge on this...
So, based on your complete lack of knowledge on the subject, you think that the current solutions will easily extend to more difficult applications? I work in a related field (image/signal processing and ML) and I can assure you that there is no such thing as a general solution here. It is absolutely reasonable to expect that your 70% solution may simply never work for the remaining 30%, and you have to go back to the drawing board.
> Phoenix is the fifth most populous city nationwide
Yes, but that same source for population indicates that Phoenix is only the 169th most congested city in the U.S. Its suburbs are presumably even less congested.
In parts of the UK, seniors get free travel on buses. That’s very popular. Even where it’s not free, buses can be great for older people who aren’t able or confident to drive (due to mobility, eyesight, etc).
If your buses aren’t popular with old people, you have bad buses.
In Germany, buses are very popular even among the elderly who could still drive (and even though they have to pay). Riding a bus is a lot more social than sitting alone in your car: they meet old friends or make new ones.
Lots of us younger folk are cool with it too. The point is that having it as an option, rather than being stuck at home unable to drive, is a big quality of life issue.
I am still relatively younger and I work in an office literally located in a train station. I still avoid public transit as much as possible. The only exception is Amtrak.
That very much depends on where you live. It's generally a lot better in Europe for a variety of reasons: historical, political and geographical. I use public transit quite a bit here in Vancouver but my limited experience of it in SF gives me some appreciation why people might not want to use it there.
When people trot out their arguments against public transportation they always refer to the inefficiencies and completely ignore the human element. No one wants to be crammed in a small space with a bunch of strangers every day.
I'd say the median public transit user in Europe smells better than the median public transit user in SF yes. In Europe public transit tends to be used by a much broader cross section of the population.
No one wants to be stuck in bay area traffic every day either. Public transit is used for a higher percentage of journeys in Europe though because it's a better option relative to the available alternatives than it is in much of the US. If we had free teleportation I imagine you'd see very few people using any other form of transport.
There is probably a degree to which public transit usage is higher in Europe not because it is a better experience than it is in the US but because driving is a worse experience. Based on my own experience it's a bit of both but clearly usage levels vary around the world due in part to varying tradeoffs between transport options based on historical, geographical, political and economic factors. There may also be a cultural element as well where preferences vary but I don't think that's the biggest factor.
> No one wants to be crammed in a small space with a bunch of strangers every day.
I'd take 15 minutes crammed in a small space with a bunch of strangers over an hour of solitude in a small metal box with windows any day.
Nobody has managed to explain to me how self-driving cars reduce congestion. Space is the limiting factor, and I do not understand how autonomous cars reduce the need for it.
On the contrary, rudimentary economic analysis says that if you do not need to concentrate on driving, you can spend more time in the car; sitting in traffic is then not that bad, and more people are willing to do it -> more congestion.
>"Nobody has managed to explain me how self driving cars reduces congestion. It is space that is the limiting factor, and I do not understand how autonomous cars reduce the need for that."
Even if it works for just highway driving in good weather--which, in spite of overall skepticism about timeframes seems relatively close--it's hard to see how it doesn't put a lot more cars on the roads. I can't believe I'm unique in saying that I'd go into the city after work or head into the mountains for a hike more frequently if someone/something else were driving me.
Operating costs are still a constraint, and those are higher than a lot of people assume if they don't really think about it. But easier driving will absolutely put more cars on the road.
In a dense city with a lot of traffic, public transport will often be faster than cars because it has segregated routes (for trains, metros, etc.) or semi-segregated ones (most trams, many bus systems).
> I take 15 minutes crammed in a small space with a bunch of strangers over one hour of solitude in a small metal box with windows any day.
I am reasonably sure economics will change that.
> Nobody has managed to explain me how self driving cars reduces congestion.
It doesn't, but it allows the work day to begin at commute time. Those that don't adapt will simply have fewer "work hours" to make up the lost productivity.
I suspect the bumpiness of roads will become a public nuisance once people start reading/computing in-car.
Possibly not showering is just more socially acceptable in the US or something? I dunno; I see this a lot in these discussions, but it'd be fairly rare for me to encounter a notably smelly person on public transport here.
Being on a bus/tram/train with strangers never really bothered me, to be honest. When I walk down the street or into a shop, I'm also surrounded by strangers; what's the issue?
I love "the human element". I might be in the minority here, but I actually like taking public transit to see a city. Meaning not just the geographical features, but also the people in it.
I would prefer a bus or train to a private car any day, assuming it's reasonably clean and not overcrowded.
In places with decent public transport, why not? In many countries, it’s free for people over whatever age to use public transport, and it’s obviously more practical for many elderly people than driving.
Perhaps that is true, but some incremental steps have the possibility of making driving infinitely better.
Imagine cars that refuse to tailgate, drive at excessive speed, drive with drowsy owners, drive on the right-hand side of the road in non-emergency situations, skip red traffic lights, etc...
Don't get me wrong: they may very well be significantly less safe than an attentive human driver in good conditions on good roads. However, they are reasonably close, and the average driver is not going to improve any time soon while self-driving cars will.
The trend of things machines can do better than people really only goes in one direction. Further, people are vastly worse at driving than they think: over 1.25 million people die per year in car accidents. That's insane.
I don't think 'insane' is accurate. What you're really showing is just how many people there are on the planet. You have to drive about 100,000,000 miles before your odds of being involved in a fatal accident approach 1. Or put another way, about 7500 years.
On top of that, when you factor in the utility of driving a car, the risk/reward ratio looks stellar. Humans are better drivers than given credit for, and it will be some time (much longer than a lot of enthusiasts will admit) before computers manage to match that.
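The arithmetic behind that figure, assuming a ballpark 13,000 miles driven per year (a commonly cited US average; the exact number doesn't change the conclusion):

    miles_per_fatal_crash = 100_000_000  # order-of-magnitude figure from above
    miles_per_year = 13_000              # assumed average annual mileage
    print(miles_per_fatal_crash / miles_per_year)  # ~7,700 years of driving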
Yeah, no. ~55.3 million people die each year worldwide and ~1.3 million of them are from cars, so 1 in 43 deaths is from cars.
That's one of the top killers any way you slice it, and every single one of those deaths is preventable.
PS: Truckers drive up to 3,000 miles per week * 48 weeks per year * 47 years (ages 18-65); that's ~6.7 million miles. The US death rate is ~1 per 88 million miles on average. So if truckers were as dangerous as average, that's ~1 death per 13 truckers. In reality they are safer than this and average fewer miles. Still, it's one of the most dangerous US jobs.
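Spelling that arithmetic out with the same numbers:

    career_miles = 3_000 * 48 * 47         # miles/week * weeks/year * years
    miles_per_death = 88_000_000           # ~1 US fatality per 88M vehicle miles
    print(miles_per_death / career_miles)  # ~13: one death per ~13 such careers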
If you're measuring the raw efficiency of the self-driving mechanism, you have to correct for those.
If you're measuring whether the actual experience of using autopilot meets the prevailing standards by which we judge "safe enough" as a country, you don't want to correct for those first three.
Especially the "the cars are new and safe overall" factor. Correcting for that is like "correcting for" the fact that all the dangerous lithium ion cells are carefully packed and armored, and coming to the conclusion that sitting in a Tesla is less safe than sitting in a bonfire.
It makes the arguments a lot worse when people don't distinguish those two questions.
I replied to Retric's claim about Autopilot: "I think these are already fairly close to human fatality rates".
If (and that's likely) features other than Autopilot contribute to Teslas killing and injuring fewer people, we must compensate for them to judge whether Autopilot is a net benefit.
Net benefit is even more tricky than that, as you need to include things like the health benefits of a more relaxed commute.
All my line of argument is trying to address is how long until self-driving cars are 'ready', which directly relates to how capable they are today. The data I looked at suggested somewhere above 'drunk driver' but below 'school bus driver'. That seems like an extreme range, but those are each human levels of competence. Which is different from how they are being portrayed, and IMO a sign they are very close to ready.
While the points you bring up are meaningful, we let people drive without those safety features. So in terms of policy it seems like an ever-higher bar, as for example automated braking systems and lane following raise the bar above human competence, but again are not being installed on older cars.
Now apply my analogy. All that armor around the battery cells contributes to Teslas setting fewer people on fire. We must compensate for it to see if lithium ion is a net benefit.
Which... doesn't really make sense. The safety features are part of the package. You can't get the dangerous part without them.
No cars are sold with a big blob of naked unprotected lithium ion cells. And no cars are sold with autopilot but not high-end crash protection.
A feature being dangerous in isolation is not enough to show that it lacks net benefit.
That's a lot of guesswork. It is very difficult to believe that an automated system with such a limited concept of its surroundings is equal/superior to a human driver. Therefore, we need better evidence, and this means no pulling of numbers from thin air.
Insurance companies which have hard data will lower your rates if your car has autopilot. I am sure they base that on hard data I don't have.
If you have better data than I would love to see it. But, IMO using ballpark numbers is much better than simply having a bad feeling about something.
These cars are not going to fail the same way people do. Computers really fuck up in broad daylight; people get drunk and drive down highways in the wrong direction.
Right now I would trust this over me driving tired or drunk but not sober. Give it 5-10 years and better hardware and I think it will be a better driver than I am.
Unless you've seen something different than I have, it is one insurer offering a small discount in the UK. Nobody in the US. The data is too sparse at this moment for an insurer to take much of a risk. Especially with recent Tesla missteps with autopilot updates.
I wonder if it’s mainly another way of getting access to telemetry data? Insurers definitely do give you a discount if you install a recording device. I assume that lets them filter out some bogus insurance claims. (Or maybe people just drive better when they know the insurance company is watching!)
Really? Last I saw, Tesla was blaming the driver. Their ostensible position (as well as other companies selling assistive systems) is that the driver has to remain alert and in control. They're a bit hypocritical about this but they certainly don't seem to be accepting liability.
It will be an interesting question once systems are on the road that claim to be fully autonomous--and no one's sure how this will work yet. If I'm sitting in a car that's sold (and regulated) as completely hands off I'm certainly not going to accept liability if it does something dumb and kills someone.
That's ridiculous. They might base this on some other benefit, such as the type of owner, but the industry is not just going to subsidize 100,000+ people. I am not saying it's foolproof, but your bias is showing.
Makes me think that Google's whole Street View project was originally designed to create a model of the streets that'd eventually be used in a self-driving car.
Here I was thinking it was a silly gimmick that I'd barely use. The self-driving Waymos of the future may all work perfectly because of it.
"Self-driving cars" is a bit of a misnomer, and understanding the limitations helps in understanding the potential as well.
It's like planes that run on autopilot -- there is still a pilot. In addition to takeoffs and landings, the pilot is there to adjust. Self-driving cars seem to be similar -- hands-off 95% of the time (and not 100%, as some might expect), but the 5% that is hands-on is critical.
I believe the meaning in this context is just a model saying what objects are nearby, where they are, how large they are, and what their velocity vectors are, and maybe guesses as to what kind of object (car, truck, pedestrian, lamppost, etc.) they are.
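Structurally, that narrow kind of world model is little more than a refreshed list of tracked objects. A minimal sketch (the field names are mine, not any vendor's):

    # One tracked object in the minimal "world model" sense described above.
    from dataclasses import dataclass

    @dataclass
    class TrackedObject:
        x_m: float          # position ahead of the ego car, meters
        y_m: float          # lateral offset, meters (positive = left)
        length_m: float     # rough extent of the object, meters
        width_m: float
        vx_mps: float       # velocity relative to the road, m/s
        vy_mps: float
        kind: str           # best guess: "car", "truck", "pedestrian", ...
        confidence: float   # classifier confidence in [0, 1]

    world: list[TrackedObject] = []  # rebuilt every sensor cycle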
Mnyeah, I see what you mean, but I wouldn't call that a world model. Nouveau AI is great for classification, so it can certainly recognise what the individual objects around it are.
However, a "world model" should include the relations between identified objects, in some representation that can then be used for inference.
For example, a classifier can label an object in the path of a vehicle as, e.g. "tree", but a world-model would have to go further than that and provide some context about trees, why they don't normally grow in the middle of roads and why they should be steered around or otherwise avoided.
This kind of world model, that places entities in the world into a complex relational context, is, indeed, beyond our current capabilities (at least for the real world, as opposed to controlled artificial worlds).
I really don't think that's controversial. However, I got a few downvotes on my previous comment so I must have misunderstood the comment I was replying to.
I don't think your original comment was at all problematic.
There's a natural impulse for a human being to look at a problem and say "of course you need a world model for this, and it will have X, Y and Z data, and it will give you all you need." That is, until they figure out how hard it is to keep X, Y and Z updated.
The thing is that a bunch of data with something like objects and velocities isn't a world model. "How large" a given thing is, is only one of a multitude of questions.
I think we humans are so good at passively keeping track of everything in our environment that we forget how difficult it is to "keep a world model" based on what one sees.
> The thing is that a bunch of data with something like objects and velocities isn't a world model.
Sure it is. Whether it's an adequate model for some particular purpose is another question. But no model includes everything.
Objects and velocities would be a good start. I don't know enough about the present state of the technology to know whether we even have that. But from some of these accidents we're seeing, it seems like we don't.
I would agree that a better model would attempt to infer intentions: what is that driver/cyclist/pedestrian likely to do next, based on what they're doing now?
Well, object labels and velocities are not a world model. You can say they are attributes necessary to build one, but a bunch of values by itself is not a model.
Current tech builds a (statistical) model of some aspects of the world that an agent must navigate- aspects that pertain to entity types, trajectories, velocities and so on.
But other bits are missing, and they are important ones. For example, what is a human doing on top of a bicycle moving at the same speed as him or her? What is the relation between a driver and a car? And so on. We still cannot represent stuff like that with any accuracy to speak of.
In short, what's missing for the most part is a representation of the relations between entities in the world- even where we can accurately represent the entities themselves, or their characteristics.
This is a pretty good explanation of the shortcomings of today's assisted-driving capabilities. The marketing around 'self driving' or 'auto pilot' should really emphasize these.
My guess here is that the lane-assist and adaptive cruise systems somehow thought the car it was following had switched one lane to the right, so the Tesla assumed it was in the leftmost lane with no moving car in front (hence accelerating back to 75 mph). But what it thought was the leftmost lane was actually the space between an HOV exit and the real leftmost lane.
What's amazing is that, prior to that article, I had no idea that collision-avoidance systems at high speed would ignore any stationary object, even one right in front of you... This dramatically changes how I'll perceive such assisted technology from now on.
While the author implied that other systems would act the same as Tesla's, they then go on to explain that most other systems are nowhere near as integrated as the Tesla system implies it is. In other words, Tesla cannot use the excuse that AEB is separate from cruise. They still have not taken down the picture on their Autopilot page which shows a driver with his hands in his lap. [1]
So if current systems cannot or will not acknowledge stationary objects even when they are in the path of travel, why is the NHTSA allowing them on the road? Why not force the issue and require it? It really seems that far too many people bought into the images marketers promised, interpreting them as akin to what is seen in movies and TV shows (enhance!).
The key thing seems to be that the automatic braking setup is a group of systems, each designed to satisfy multiple constraints, and these systems aren't particularly well integrated.
This seems like how I would imagine a "dumb" system would be designed. One would imagine an "artificial intelligence" (of any sort at all) would find a way to smoothly integrate all these systems. The problem is that end-to-end neural nets, and similar approaches intended for such integration, don't seem ready for prime time.
I would generally prefer smarter (more error-prone) systems to be surrounded by dumber ones. So the brake pedal overrides everything; then you'd have a run-of-the-mill radar braking system, a regular lane-following system, and only after that (and maybe even further back) the more advanced self-driving-ish stuff. It's weird that it isn't designed that way, since that's how automated systems in factories and the like are set up.
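In sketch form, that layering is just a fixed priority order, with the dumber, more trustworthy layers outranking the smarter ones (all names here are illustrative):

    def arbitrate(brake_pedal_pressed, radar_brake, lane_keep, planner):
        """Pick the command that actually reaches the actuators."""
        if brake_pedal_pressed:      # the human brake pedal overrides everything
            return "full_brake"
        if radar_brake is not None:  # plain radar emergency braking comes next
            return radar_brake
        if lane_keep is not None:    # then ordinary lane following
            return lane_keep
        return planner               # the advanced self-driving stack goes last

    # e.g. arbitrate(False, None, "steer_center", "change_lanes") -> "steer_center"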
So the brake pedal overrides everything; then you'd have a run-of-the-mill radar braking system, a regular lane-following system, and only after that (and maybe even further back) the more advanced self-driving-ish stuff
The problem the article points out is that there's no safe, dumb response to radar seeing something stopped right in front of a car going 70 mph.
If anything, the systems the article describes are a lot like what you describe - a bunch of semi-separate systems, each of which can act if it can be sure that acting is safe, and which do nothing otherwise.
The problem seems to be that given enough no-easy-answer situations, you wind up with a false sense of security.
Why does "in case of anomaly or ambiguity, slow the fuck down, gather more data and reassess" never seem to be part of the functional logic with these things?
> What's amazing is that, prior to that article, I had no idea that collision avoiding system at high speed would ignore any stationary object, even if it's right in front of you... This changes dramatically how I'll perceive such assisted technology from now on.
Yeah, they all put it in the manual, but who reads those?
My parents bought a new 2018 Camry a few weeks ago that came with a manual roughly an inch thick. Expecting anyone to read it is bordering on delusional. I say this as someone who often does read manuals (because I’m a nerd).
An inch thick? That's roughly the size I'd expect a service manual to be, but I struggle to think of why the operator's manual would need to have so much content.
(Unless it's because they decided to print the same thing in 200 different languages. I've seen that with some other equipment --- surprisingly thick manual, but a tiny fraction of it is actually in English.)
That's extremely normal. My 4Runner's user manual is about that thick and it's all in English. It's a relatively small format, has lots of diagrams. But modern cars have a lot of systems, buttons, indicators, etc.
Not even bothering to flip through a meager 1"-thick manual for a newly acquired automobile strikes me as negligent. At least review the sections covering features you've never had in a vehicle before, geez.
> Not even bothering to flip through a meager 1" thick manual for a newly acquired automobile strikes me as negligent.
Perhaps, but then almost everyone is negligent. The car designers (and the whole world) know that well, so to depend on people reading the manual is disingenuous and dangerous.
The idea that someone is going to spend an hour studying a user manual when they pick up a rental car at the airport is just silly.
I actually find it something of an issue with rental cars these days. Obviously the foot pedals, turn signals, etc. are well enough standardized that it takes very little to accustom yourself to them. But environmental/entertainment controls? And then there's the fact that things like backup sensors may or may not be available in a given car, even though you tend to start depending on them a bit when you have them.
Just because it's common for people to get in and go with unfamiliar cars doesn't make it any less negligent.
If you don't know how to e.g. turn on the hazard lights in the vehicle you're driving without searching extensively or digging out the user's manual, you've failed to be a responsible driver.
If I got in a loaner car with semi-autonomous features, there's no way I'd drive away without first paging through a user's manual or online guide to understand those features, especially given how many crashes have occurred due to apparent misuse of these vehicles.
This article and many others like it are written as if one ought to be ashamed for thinking self-driving cars should not routinely smash into stationary objects in their paths or kill people:
> This isn't the only recent case where Autopilot steered a Tesla vehicle directly into a stationary object—though thankfully the others didn't get anyone killed. Back in January, firefighters in Culver City, California, said that a Tesla with Autopilot engaged had plowed into the back of a fire truck at 65mph. In an eerily similar incident last month, a Tesla Model S with Autopilot active crashed into a fire truck at 60mph in the suburbs of Salt Lake City.

> A natural reaction to these incidents is to assume that there must be something seriously wrong with Tesla's Autopilot system. After all, you might expect that avoiding collisions with large, stationary objects like fire engines and concrete lane dividers would be one of the most basic functions of a car's automatic emergency braking technology.
Regardless of how they came to be, yes there is something seriously wrong with these systems.
This is an odd claim to make, since the article doesn't say anything at all about self-driving cars. Moreover, I don't think it's trying to shame anyone. It's making the point that there is a difference between crashing due to an error in the driver assistance systems and crashing due to encountering a situation that the system is incapable of handling. The tail end of the article states unambiguously that this is not an acceptable state of affairs, and that the next generation of driver assistance systems needs to take a different approach.
As to whether this article is really a stealth advertisement for GM's Super Cruise technology, well... yeah, it probably is. But I still found it informative as to the current state of driver assistance systems. I was not aware that the cruise control, lane keeping, and automatic braking systems were generally completely independent of each other.
Moreover, I don't think it's trying to shame anyone. It's making the point that there is a difference between crashing due to an error in the driver assistance systems and crashing due to encountering a situation that the system is incapable of handling.
Well, that point is most likely lost on someone who dies in a crash of one of these cars. I.e., the difference is big from some viewpoints but irrelevant from others.
I think we can see the point where objections to self-driving cars start to become strong. It's not some cooked-up version of the "trolley problem" but a situation where a self-driving car does something a human observer naturally interprets as "wrong" and "dumb", does it in a way that kills someone, and the designers answer: "sorry, not a mistake, just a necessary design compromise; this is still, on average, safer than a human". I think this would produce a strong emotional effect regardless of any "generally safer" assertion.
Knowing the difference is important because in one case, you need to fix a bug, and in the other case, you need to design a new system (or decide that the problem is unfixable and give up).
Who was questioning it? Who was suggesting that it was possible to undo a death that has already happened? I don't understand what point you're trying to make.
All I'm saying is that the distinction is important from the perspective of someone who wants to prevent this from happening again, and the perspective of someone who is in charge of making the laws and regulations to prevent this from happening again, and the perspective of the people voting for those lawmakers.
I think Timothy B. Lee, the author of the article, was caught up in the early hype about "self-driving" cars, obviously before he (or many others) had a good understanding of the technology. He's covered the field extensively. Now that the limitations of the technology are becoming more clear, it seems to me he's trying to negotiate the very narrow border between keeping his original enthusiasm up, and not spreading misinformation. I interpret this article as an attempt to keep some sort of balance, between reporting on a promising technology without over-promising and without veering too far into accusing his own readers of ignorance.
Timothy B. Lee here. You are correct, except that I think it's a mistake to treat ADAS systems as "self-driving cars." I'm bullish on true self-driving car projects like Waymo and Cruise. I think ADAS systems have a lot of problems that have become more clear to me as I've covered them in more depth.
I think that full autonomy (level 4 and 5) is an AI-complete problem and a true solution is many, many years away. On the other hand, I'm not an expert (I'm doing an AI PhD but on a different subject) so I may be overestimating the difficulties involved.
One problem with reporting on autonomous driving is that a lot of the technology is proprietary and the research takes place behind closed doors, so it's very hard to understand exactly where the state of the art is. We're left with the announcements from the companies who actually sell it, which will inevitably tend to be overinflated.
As the technology becomes more common, I guess we'll all end up adjusting our expectations, one way or another. I'm looking forward to your future articles :)
Didn't hit me that way. Sounds more like we as consumers misinterpret what the technology is and trust it too much... of course, there's a feedback loop where companies like Tesla call everything autonomous and hype the marketing to mislead consumers.
It seems like the radar/camera system Toyota uses for its Pre Collision System has the same weaknesses as Tesla's sensor suite, but the list of exceptions in the Toyota manual is pretty brutally honest about the issues.
Just for pedestrians it warns:
Some pedestrians such as the following may not be detected by the radar sensor and camera sensor, preventing the system from operating properly:
• Pedestrians shorter than approximately 3.2 ft. (1 m) or taller than approximately 6.5 ft. (2 m)
• Pedestrians wearing oversized clothing (a rain coat, long skirt, etc.), making their silhouette obscure
• Pedestrians who are carrying large baggage, holding an umbrella, etc., hiding part of their body
• Pedestrians who are bending forward or squatting
• Pedestrians who are pushing a stroller, wheelchair, bicycle or other vehicle
• Groups of pedestrians which are close together
• Pedestrians who are wearing white and look extremely bright
• Pedestrians in the dark, such as at night or while in a tunnel
• Pedestrians whose clothing appears to be nearly the same color or brightness as their surroundings
• Pedestrians near walls, fences, guardrails, or large objects
• Pedestrians who are on a metal object (manhole cover, steel plate, etc.) on the road
• Pedestrians who are walking fast
• Pedestrians who are changing speed abruptly
• Pedestrians running out from behind a vehicle or a large object
• Pedestrians who are extremely close to the side of the vehicle (outside rear view mirror, etc.)
I imagine that list was drafted under the direction of the legal department, and they wanted an exhaustive list that might very well exclude basically all pedestrians. They don't really expect many people to read it unless they find themselves in court, when it will be too late.
It still somewhat baffles me that these systems are trying to identify certain classes of object to avoid colliding with, rather than simply avoiding hitting anything solid over a certain size. Do I care whether the thing in front of me is a hatchback or a group of pedestrians? No! I just want the car not to hit it!
Huh, most of these I can understand with my layman's understanding of the sensors involved, but why would it miss the pedestrians who are on a metal manhole cover?
I see this as far more a caution to the public for presuming that automated systems have a singular worldview as humans ... think ... we ourselves do[1], rather than being comprised of multiple and independent systems. And a caution to both the technologists who fail to consider this and real-world use consequences, as well as those who fail to see divorced systems as problematic.
The real world is where all your independent worldviews are tested, simultaneously and integrated. And where metaphor shear may well prove fatal.
________________________________
Notes:
1. A not-necessarily accurate worldview itself, though I'll leave that to the cog-psych types to explore in depth.
Regarding your edit, I agree that, in isolation, the passage you quoted can easily be misinterpreted to imply that the current functioning of these systems is acceptable, and the only reason I didn't read it that way is because the rest of the article explicitly rejects that interpretation.
If Tesla actually made Autopilot into an emergency braking (or, more generally, safety) system, it would be fine. All they’d have to do is make two changes: switch from automatic steering to a lane departure warning system (so the driver must keep their hands on the wheel and will always be paying attention) and possibly rename it.
Of course, their stock price might crash and a whole bunch of customers who paid for self-driving features might be seriously pissed.
Ever been in a Tesla? The current system requires you to keep your hands on the wheel, and it always has. In fact, there are YouTube videos of idiots showing how they use clamps and other defeat devices to take their hands off the wheel, which the car is inherently designed NOT to let you do.
Yes, I’ve driven them. Given personal experience and Elon’s claims of drivers using autopilot with hands off the wheels, I believe you’re incorrect.
To be clear, when I say the software should be changed to not automatically steer, I don’t mean that it should more aggressively complain about hands off the wheel. I mean that the car should drive in a straight line with hands off the wheel, and it should also alert the driver if it thinks that the driver is leaving the lane.
Based on previous discussions, Teslas cannot do what you are asking for without LIDAR; Musk would have to admit he made a mistake in not using LIDAR and invalidate the sensor suite on all Teslas on the road.
Tesla sold a $3000 "full self-driving hardware" upgrade for a while, and now advertises all cars with the same, while suggesting that the software update to enable it is just around the corner.
If they admit that radar + optical cameras won't be good enough for full self driving, there may be a lot of refunds to process.
From first principles, just optical cameras should be enough - humans can do it. But the software to make it work could be general-AI-level complex. Personally, I think LIDAR self-driving will be available in less than 5 years, but optical may take 30 years. And the company most capable of developing optical self-driving AI is Google; the fact that they are sticking to LIDAR is a good indication of the complexity of optical.
From limited experience test-driving newish Teslas and driving other modern cars, Tesla's system is quite weak. Audi cars seem to have a more accurate concept of where the lane is. And Audi cars won't auto-steer, so it's a double win.
It's currently illegal to say that something can cure cancer, and this is because a lot of people just don't understand the details of fighting cancer well enough to really know what a cure means.
Perhaps we need to regulate how driver assistance is sold so that people don't accidentally think they can just keep their eyes off the road?
The only irresponsible car company has been Tesla (and Uber). All the other companies positioned their adaptive systems correctly and very conservatively. Tesla came out and downright asserted their system was autonomous[1]. Predictably, they have since had to scale that back significantly.
Tesla literally never says that the Autopilot system is self-driving, and it has repeated nags and reminders to keep hands on the wheel and pay attention. This message is repeated during onboarding, in training videos, in the manual, etc. 99% of Tesla owners follow these instructions. There will of course be 1% of drivers who will not.
> repeated nags and reminders to keep hands on the wheel and pay attention
Above 45mph, hands-free is allowed for 3 minutes [0] when following another car, or 1 minute if not. There's no eye tracking, unlike Super Cruise [1]. It only takes seconds to get distracted, as the video of the Uber driver proved.
> There will of course be 1% of drivers who will not.
Somewhere in the middle of the page there's a single sentence: "Every driver is responsible for remaining alert and active when using Autopilot, and must be prepared to take action at any time." But everything else on the page is about convincing you that the car can drive itself better than you can.
>Tesla literally never says that the Autopilot system is self-driving and has repeated nags and reminders to keep hands on the wheel and pay attention.
When Tesla was debuting Autopilot, I got the sense that the system was more than your typical adaptive cruise control + lane assist solely designed for long straight roads. That was implied in the marketing and in the messaging from Musk. Did the system even provide something as rudimentary as an audible warning if a driver climbed into the back seat, like some dumbasses on YouTube did?
>This message is repeated during onboarding, in training videos, in the manual, etc. 99% of Tesla owners follow these instructions.
> never says that the Autopilot system is self-driving
That sentence sounds strange to me. It is called "Autopilot", and in our society's mindshare that means self-driving. Either way, the public believes these cars are self-driving, and as a result, people likely buy the car for this feature.
This idea of refusing to use a 3D imaging system because it's theoretically possible to do without one is sort of bogus.
No one refuses to use GPS because it's possible to use a map or sextant like humans can. No one refuses to use radar because you might be able to point at a plane with your eyes.
Tesla should invest in improved lidar or imaging radar, rather than hoping they can come up with a neural network to solve all their problems from visual images.
>> So the people designing the next generation of autonomous driving systems are going to need a fundamental philosophical shift. Instead of treating cruise control, lane-keeping, and emergency braking as distinct systems, advanced driver assistance systems need to become integrated systems with a sophisticated understanding of the car's surroundings.
My understanding was that self-driving systems are trained end-to-end to do simultaneous localization and mapping (a.k.a. SLAM [1]). In other words, the same model would control braking, accelerating, lane keeping, and everything else.
In fact, I thought this was why Uber had switched off its car's built-in braking system: because their AI had taken over braking, and the AEB on the car would interfere with the self-driving.
Perhaps that is not the case for Tesla in particular, though?
The cost of false positives at freeway speeds is part of the reason:
> When a car is moving at low speeds, slamming on the brakes isn't a big risk. A car traveling at 20mph can afford to wait until an object is quite close before slamming on the brakes, making unnecessary stops unlikely. Short stopping distances also mean that a car slamming on the brakes at 20mph is unlikely to get rear-ended.
But the calculation changes for a car traveling at 70mph. In this case, preventing a crash requires slamming on the brakes while the car is still far away from a potential obstacle. That makes it more likely that the car will misunderstand the situation—for example, wrongly interpreting an object that's merely near the road as being in the road. Sudden braking at high speed can startle the driver, leading to erratic driving behavior. And it also creates a danger that the car behind won't stop in time, leading to a rear-end collision.
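To put rough numbers on the quoted trade-off, here's a back-of-the-envelope sketch in Python. The 0.8 g deceleration and the 1.5 s reaction/actuation delay are invented assumptions for illustration, not anyone's published figures:

    # Stopping-distance arithmetic behind the 20 mph vs 70 mph trade-off.
    # Assumes ~0.8 g of braking on dry pavement and a 1.5 s delay before
    # full braking force is applied; real systems and roads vary widely.
    G = 9.81          # m/s^2
    DECEL = 0.8 * G   # assumed peak deceleration
    DELAY = 1.5       # assumed reaction/actuation delay, seconds

    def stopping_distance_m(speed_mph: float) -> float:
        v = speed_mph * 0.44704              # mph -> m/s
        return v * DELAY + v ** 2 / (2 * DECEL)

    for mph in (20, 70):
        print(f"{mph} mph: ~{stopping_distance_m(mph):.0f} m to stop")
    # 20 mph: ~19 m, so the car can wait until an obstacle is quite close.
    # 70 mph: ~109 m, so braking must start while the obstacle is still far
    # away, exactly where "near the road" vs "in the road" is hardest to judge.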
When training new human drivers, they're taught to steer around obstacles at freeway speeds instead of braking for them. This is partly because it can take too long to brake.
The other half is that these are often separate modules calling each other.
> And like adaptive cruise control, automatic emergency braking is often implemented as a separate system from the lane-keeping module. Most AEB systems lack the kind of sophisticated situational awareness a fully self-driving system would have. That means it may not be able to tell if an object 100 meters ahead is in the current travel lane or the next lane over—and whether it's a temporarily stopped car, a pedestrian, or a bag of garbage.
The auto-braking system could be a basic distance sensor calling the drive-by-wire API with "FULL STOP". This would definitely be non-ideal for freeway situations and speeds.
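In other words, something like this purely hypothetical sketch; the module and API names are invented for illustration:

    class DriveByWire:
        """Stand-in for the vehicle's actuation bus."""
        def command(self, action: str) -> None:
            print(f"drive-by-wire: {action}")

    class NaiveAEB:
        """Standalone emergency braking: range reading in, brake command out.
        Knows nothing about lanes, curves, or what the cruise module is doing."""
        def __init__(self, dbw: DriveByWire, trigger_distance_m: float = 10.0):
            self.dbw = dbw
            self.trigger_distance_m = trigger_distance_m

        def on_range_reading(self, distance_m: float) -> None:
            if distance_m < self.trigger_distance_m:
                # Sensible at parking-lot speeds; at 70 mph the car has
                # already out-driven this trigger distance.
                self.dbw.command("FULL_STOP")

    NaiveAEB(DriveByWire()).on_range_reading(8.0)  # -> drive-by-wire: FULL_STOP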
"If you're at lower speeds, at 30mph, and it detects a stationary object, these systems will generally respond and slow the car down and bring it to a stop," Abuelsamid told us. "When closing speed is above about 50mph, if it sees a stationary car, it's going to ignore that."
Indeed, automated braking that can only apply 100% braking is not ideal at freeway speeds. Collision avoidance at that speed must depend more on steering than stopping, but systems in cars aren't integrated in a way that would allow them to create this level of awareness.
Steering around obstacles is not trivial to do safely. Indeed, when training new human drivers, they're taught not to steer around minor obstacles (e.g. smaller animals) at freeway speeds, but to slow down and accept the impact if it happens, instead of swerving into the opposite lane or a ditch on the other side, which has a lot of potential to kill people. Similarly, for many kinds of debris and garbage on the road, if you can't safely change lanes (or it's a single-lane road, where you can't go around at all if there's oncoming traffic), the safest approach is to just drive through the obstacle; possibly even if you could stop before it, which isn't always the case.
This is probably one of the more complicated freeway cases--which otherwise are likely mostly easier environments than urban ones. It's not rare to have debris of various types in and around the roadway. The right action from braking to swerving/steering (or some combination thereof) depends on a whole lot of factors.
Until I read this article, I thought that the all-optical adaptive cruise control system of my BMW i3 was clearly inferior to modern radar-based gear. I now see at least one use case -- stationary objects -- where the i3 setup has an edge. (But the i3 setup suffers mightily in low-contrast and low-sun-angle scenarios.)
And Tesla's Model 3 has optical cameras too, but they've chosen to continue using radar alone for automatic emergency braking, for reasons known only to them (easier to maintain, due to their older radar-only fleet?).
But, yes, the i3 and Subaru have optical AEB that works great. Something people like to gloss over again and again in these discussions while talking about radar's limitations (and ignoring that the Model 3 has under-utilized forward facing optical cameras).
A common situation, certainly on British motorways in relatively heavy traffic, is to be tooling along at 70mph, and then have to come to a halt - or drop to 20mph - fairly abruptly due to waves of slow moving traffic that move backwards along the carriageway. I wonder how well such systems would handle that kind of situation, where obstacles might not be stationary, but the speed difference is large?
It depends on the detector used. Radar in particular is bad for stationary obstacles because it can't separate a stationary obstacle's reflections from reflections off the ground. Moving objects are no problem because their radar reflections get Doppler-shifted and can then be separated from the ground clutter.
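Back-of-the-envelope, for a 77 GHz automotive radar (a common band; all the numbers here are illustrative):

    # Two-way Doppler shift: f_d = 2 * v_closing * f_carrier / c
    C = 3.0e8           # speed of light, m/s
    F_CARRIER = 77e9    # radar carrier frequency, Hz

    def doppler_shift_hz(closing_speed_ms: float) -> float:
        return 2 * closing_speed_ms * F_CARRIER / C

    ego = 31.0  # ego speed, m/s (~70 mph)
    # A stopped car ahead closes at exactly ego speed -- the same closing
    # speed as every guardrail, sign, and patch of road in the beam:
    print(doppler_shift_hz(ego))         # ~15.9 kHz, buried in ground clutter
    # A car ahead doing 20 m/s closes at only 11 m/s and stands out:
    print(doppler_shift_hz(ego - 20.0))  # ~5.6 kHz, separable from clutter

That's why many radar trackers simply drop returns whose Doppler matches the stationary background, and, with them, stopped fire trucks.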
In my experience - a car with both adaptive cruise and auto emergency braking: it handles the situation quite well, but the human driver (i.e. me) uses hints that the autonomous systems cannot, such as brake lights coming on ahead. The auto systems take a second or two longer to react.
I've driven a rental with those features as well, and in my experience at least the Prius version is almost a little oversensitive to brake lights. In adaptive cruise mode, I've had it kind of slam the car's brakes pretty hard just because another car not even in my lane tapped their brakes to make an exit.
Of course, I'm not going to test it against stationary objects at high speeds, but at low speeds it definitely gives escalating warnings before you hit the door or wall of a garage. As someone who dented a garage door once, I value this feature.
> A natural reaction to these incidents is to assume that there must be something seriously wrong with Tesla's Autopilot system. After all, you might expect that avoiding collisions with large, stationary objects like fire engines and concrete lane dividers would be one of the most basic functions of a car's automatic emergency braking technology.
> But while there's obviously room for improvement, the reality is that the behavior of Tesla's driver assistance technology here isn't that different from that of competing systems from other carmakers.
Is this actually saying that there's nothing "seriously wrong" with an autopilot that randomly decides to aim at a wall and accelerate, because other self-driving systems also do that?
I don't follow this that closely, and I've tended to think of self-driving cars as an inevitable thing that results from the march toward a "perfect" system.
But I think one of our blind spots as a culture is the assumption that nothing bad will happen if you drive a car correctly and follow all of the rules, and I think that assumption might prove to be wrong and that collisions are an inherent aspect of driving a car in an unpredictable world.
We as humans like to feel like we're in control of our lives, and I think that we have a prejudice toward looking at situations as if that were true, and that bad outcomes are the result of bad decisions instead of bad luck.
>But I think one of our blind spots as a culture is the assumption that nothing bad will happen if you drive a car correctly and follow all of the rules, and I think that assumption might prove to be wrong and that collisions are an inherent aspect of driving a car in an unpredictable world.
That assumption is clearly wrong. There are other drivers, for a start, and even leaving that aside for the distant day when most vehicles are autonomous, the world remains unpredictable.
> And it also creates a danger that the car behind won't stop in time, leading to a rear-end collision.
Inattentive tailgaters shouldn't be accommodated. Sure, their problem becomes your problem when you brake hard, but I'd much rather be rear-ended (with head rests and the whole rear of the car as a crumple zone) than plow head on into a stationary object at high speed.
If the car in front of me doesn’t stop until it’s too late (or at all) it doesn’t matter how attentive I am, I probably can’t stop in time either.
Most of the time I can’t see past the car in front of me. It’s too tall, no windows (commercial truck), etc. so the ONLY way I know I’m in danger is how they’re acting.
I kinda disagree with this. If you cannot stop in case the car in front slams the brakes as hard as they can, you probably don't have an appropriate following distance or you are not paying attention to the road.
The tremendous personal arrogance of Elon Musk, and the blissful disregard of those who would rather agree than question him, will probably ruin the industry for many others to come. Eventually enough people will have died in frankly ridiculous circumstances that trust will be lost.
A lot of new cars come equipped with emergency braking systems, including affordable cars like Hondas. There has not been an increase in accidents due to these systems, nor has there been an epidemic of cars autonomously slamming on their brakes on the freeway for no reason. So I tend to think the manufacturers got the engineering and safety trade-offs right for those cars.
The big difference is that people who drive those cars do not generally expect that their cars are self-driving. Tesla owners do seem to expect that, and consequently the decisions made about trade-offs for, say, automatic braking in a Honda Accord are not the right decisions for a Tesla Model S.
The bottom line is this: if people think their cars can drive themselves, the cars really need to be able to do that or there are going to be crashes.
Now, most cars will do this these days, and do it using the same sensors they need to trigger air bag deployment even without ACC.
I don't know of any study that tries to estimate the effects of doing it earlier, like the Audi with ACC does (i.e., no study tries to evaluate whether the ones without ACC sensors achieve the optimal amount of pre-tension in the time they have, which would make pre-tensioning them earlier, like the ACC systems do, pointless).
I suspect that it's too difficult for the systems to accurately determine whether an upcoming object is actually in the lane. It's not uncommon to pass stationary vehicles parked on the side of the road. If you're approaching those vehicles along a curve, they might seem like they're in the road to a radar sensor even when they're not. I can see why it's a hard problem: attempting to detect these probably leads to too many false positives.
Still, you would think that there would be some threshold where the car decides, "Hey, this stationary obstacle is right in front of me. I should slow down"
You could imagine a next generation self-driving system that uses the combination of data from multiple sensors as well as maps to detect plausible obstacles. The mapping data could tell the vehicle when it should expect to turn. Maybe the vehicle could integrate imaging information from both radar and stereo cameras, to detect where the lane is, and which obstacles are in the lane.
Are there good existing techniques in the computer vision community for synthesizing data from multiple imaging sensors, like radar and stereo cameras and LIDAR? I'm imagining dumping all this data into an algorithm, getting back a 3D reconstruction of probable objects around me, along with metadata describing their velocity, confidence of the assessment, and all that.
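To make the simplest version of that concrete, here's a minimal log-odds occupancy-grid sketch; the sensor names, confidences, and grid parameters are all invented for illustration, and real pipelines (Bayesian occupancy grids, Kalman or particle trackers, learned fusion) are far more involved:

    import numpy as np

    GRID = 200      # cells per side, vehicle-centered
    CELL_M = 0.5    # each cell covers 0.5 m x 0.5 m

    def fuse(detections_by_sensor: dict) -> np.ndarray:
        """Each sensor contributes (N, 3) rows of x, y, confidence."""
        log_odds = np.zeros((GRID, GRID))
        for sensor, dets in detections_by_sensor.items():
            for x, y, conf in dets:
                i = int(x / CELL_M) + GRID // 2
                j = int(y / CELL_M) + GRID // 2
                if 0 <= i < GRID and 0 <= j < GRID:
                    # Treat sensors as independent: evidence adds in log-odds.
                    log_odds[i, j] += np.log(conf / (1.0 - conf))
        return 1.0 / (1.0 + np.exp(-log_odds))  # back to probabilities

    occupancy = fuse({
        "radar":  np.array([[30.0, 0.0, 0.6]]),  # weak stationary return
        "camera": np.array([[30.0, 0.0, 0.7]]),  # visual hit at the same spot
    })
    # Two mediocre detections that agree push the cell to ~0.78 "occupied" --
    # the kind of cross-sensor corroboration a radar-only AEB never gets.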
> Still, you would think that there would be some threshold where the car decides, "Hey, this stationary obstacle is right in front of me. I should slow down"
From the article:
> When a car is moving at low speeds, slamming on the brakes isn't a big risk. A car traveling at 20mph can afford to wait until an object is quite close before slamming on the brakes, making unnecessary stops unlikely. Short stopping distances also mean that a car slamming on the brakes at 20mph is unlikely to get rear-ended.
But the calculation changes for a car traveling at 70mph. In this case, preventing a crash requires slamming on the brakes while the car is still far away from a potential obstacle. That makes it more likely that the car will misunderstand the situation—for example, wrongly interpreting an object that's merely near the road as being in the road. Sudden braking at high speed can startle the driver, leading to erratic driving behavior. And it also creates a danger that the car behind won't stop in time, leading to a rear-end collision.
So... don't brake suddenly. Decelerate, buy some time to collect and analyze more data, and if the end result still appears to be a collision, then brake hard.
If a human driver started hallucinating behind the wheel, we would expect them to do the same, not maintain speed (or accelerate!) through the supposed object in their path.
This tech was supposed to be safer and more convenient than human driving, not a simulation of the decision-making abilities of a 12-year-old playing Grand Theft Auto.
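A minimal sketch of that staged policy; every threshold here is invented for illustration, and a real controller would be certified safety software, not ten lines of Python:

    def braking_command(collision_prob: float, time_to_collision_s: float) -> str:
        """Map the planner's current belief about an obstacle to a brake action."""
        if collision_prob < 0.2:
            return "NONE"
        if collision_prob < 0.7 and time_to_collision_s > 3.0:
            # Uncertain but still far away: shed speed gently, which buys
            # sensing time and shortens the eventual stopping distance.
            return "MODERATE_DECEL"
        return "FULL_BRAKE"  # belief is high, or time has run out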
> So... don't brake suddenly. Decelerate, buy some time to collect and analyze more data, and if the end result still appears to be a collision, then brake hard.
This requires the automated brake computers to be connected to the autonomous cruise computers. According to the article, in most cars, these computers are separate systems.
It's like if one person operated the brake pedal, another operated the gas, and a third steered the car. When one hallucinates, the others might not realize right away.
This is exactly what previous generation AI in the 80's and 90's were criticized for: the systems built were too 'brittle'.
They worked well for the narrow domains they were programmed for but they couldn't deal with novel situations and had no way to generalize. It is not clear we can engineer our way out of this by just adding new rules.
Proponents of early symbolic AI systems (expert systems) said we just need to be patient and add more rules and at some critical point the system would reach the singularity. One such project has been going on for 34 years! https://en.wikipedia.org/wiki/Cyc
> Still, you would think that there would be some threshold where the car decides, "Hey, this stationary obstacle is right in front of me. I should slow down"
I think there is, it's just that by the time it reaches that threshold, the distance to the obstacle is less than the stopping distance for the car at 60+ mph. A human making that judgment might steer around the obstacle, but AEB systems don't have that option.
> I suspect that it's too difficult for the systems to accurately determine whether an upcoming object is actually in the lane. It's not uncommon to pass stationary vehicles parked on the side of the road. If you're approaching those vehicles along a curve, they might seem like they're in the road to a radar sensor even when they're not. I can see why it's a hard problem: attempting to detect these probably leads to too many false positives.
This dilemma is only really bad for a passive system. For a system that's actively steering, it can make a committed decision to curve, so that even if it misjudged the obstacle it still won't crash.
It doesn't hurt to slow down for a predicted near-miss either, as you get close to it.
> Still, you would think that there would be some threshold where the car decides, "Hey, this stationary obstacle is right in front of me. I should slow down"
Yeah. There's a big difference between "obstacle might be in the way" and "obstacle is definitely in the way".
Autopilot would have worked fine if the lane markings were properly painted. No one will blame Caltrans' terrible maintenance of 101. As I am writing this, NB 101 near Oregon Expressway has been under construction for over two years with no painted lane markings (and the lanes suddenly become super narrow), leading to merge hell and probably at least 3-4 fender benders a week. The Tesla GUI shows you whether or not it can see and track the lane markings; watching the road is much more important than "keeping your hands on the wheel". The one thing I will lay at the feet of Tesla: they really should have implemented eye tracking.
You're presuming that in these fatalities that the driver's eyes weren't on the road. What if the autopilot executed a maneuver that attentive drivers couldn't recover from in time?
Go watch the YouTube video of another Tesla owner testing the intersection where this accident occurred. The lane markers on the left side vanish, but there is a divider marker setup. It's super clear the Tesla followed the divider marker, thinking it was a lane, and hit the jersey barrier.
Also of note: the impact attenuator was completely missing due to bad highway maintenance. If the Tesla had struck an engineered set of barrels instead of side-striking a jersey barrier, the driver would have walked away.
I've seen it, and it has zero effect on what I said. The car "thinking it's in a lane" while being misaligned? I can possibly blame the people that draw lane markings. The car "thinking it's in a lane" while it slams into/past the edge of the road? 100% the car's fault. It shouldn't do that even if there are no lane markings at all.
I'm so glad I read this. I've been driving a VW Golf with adaptive cruise control and enjoying its responsiveness in heavy traffic. Knowing that it doesn't work with stationary objects will really change how close I keep my foot to the brake pedal. The first time I realised what adaptive cruise control was, I was delighted, but I've become increasingly worried that I rely on it too much, so I'm interested to try driving a Tesla or similar vehicle to see how I react cognitively to a car that takes over more control. Anyone have any similar experience?
At present, the 'driver assistance' solutions on the market seem to be counter-intuitive. As the article notes, they may handle most situations competently, lulling drivers into a false sense of security, before reacting completely inappropriately to a rapidly developing hazard leaving the driver no time to re-take control.
Tesla explicitly states that a driver using their Autopilot must keep their hands on the wheel at all times. They place the driver in the role of supervising the machine. It seems like other driver-assistance technologies require the same, which fundamentally misunderstands what drivers expect these systems to do. Drivers would expect these systems to ease their workload or take over some of the tedium of long highway drives. But these systems require the driver to be alert and constantly monitoring the system for abnormal behaviour. As far as I can tell, this is more work than simply driving manually. On a long drive, I can slip into my own sort of autopilot where I'm paying full attention to the road, able to react to changing conditions ahead, but am also entertaining a long train of thought in my head.
Six seconds sounds like a long time to react to a changing situation when driving (drivers know that accidents happen in split seconds), but if things are working normally, six seconds is about the length of time you might spend changing the climate controls or selecting a different playlist. If Tesla are saying their cars can't be trusted for a single-digit number of seconds without human supervision lest they total themselves (it's so often touted that the driver's hands were off the wheel and thus he couldn't retake manual control and prevent the crash), then these companies are approaching driver assistance technology in completely the wrong way. More than that, these systems are outright dangerous because they're implemented so completely wrong. Taking your hands off the wheel in an unassisted car for six seconds won't result in it driving itself into the barrier beside you unless you're extremely unlucky at that exact moment and your front wheel hits a pothole. For these machines to be this unpredictable is becoming a serious safety hazard.
I'm going to stick with a car with non-adaptive cruise control and nothing else. And be extra vigilant around Teslas on the roads.
Re: "The fundamental issue here is that tendency to treat lane-keeping, adaptive cruise control, and emergency braking as independent systems."
You won't be competitive with normal human drivers that way. Humans (usually) have the ability to combine many diverse, and potentially conflicting, pieces of info into a coherent story. AI and AI-like automation will need to similarly synthesize diverse clues.
Am I the only one who thinks the fault primarily lies with the driver? He trusted his life to equipment he didn't adequately understand and/or put too much faith in. This is not to say that Tesla and others shouldn't learn from this and try to improve their systems and driver education, of course. But drivers should also understand how their cars work, and respect their limitations.
Radar on Autopilot is a crutch, and not a great one. I know Tesla are working hard on moving everything to vision detection. Andrej Karpathy, head of Autopilot vision, gave a really good talk recently on what keeps him up at night: datasets, so the cars know what they are dealing with.
It did make me wonder if firetrucks were accidentally left out of the datasets.
This article tries to cover for Tesla by relying on the fact that competing systems also can't avoid similar objects. Sorry, this is further proof that the engineers and managers at Tesla willingly put unsafe technology on America's roadways. A hard stop and prison time for someone is in order at this point.
We can say it is not an autopilot, where "autopilot" means what we think it means, because Tesla and the fanboys are pulling out other definitions that mean something different, while marketing and Elon share and promote images and videos of people not using their hands when using Autopilot.
There also needs to be a question of what the purpose of the so-called Autopilot is. You have to sit there with full attention and your hands on the wheel. How is that better than just steering yourself?
I naively thought modern systems used a battery of lidars (with different wavelengths to provide redundancy, like visible+IR+UV) to create a precise 3D map of the objects around the car. Coupled with cameras for visual awareness, this seems like the only sane system.