Upgrading Autopilot: Seeing the World in Radar (tesla.com)
396 points by _miqu on Sept 11, 2016 | 247 comments



More info: Tesla's radar is a Bosch device, either the Bosch LRR4, which is several years old, or the Bosch mid-range radar.[1] Bosch has made about 10 million of these and related models. Tesla is not the only customer.

This device isn't enough for a full point cloud. It doesn't scan in elevation, just azimuth. Some variants do have an upward-pointing beam in addition to the main forward beam, which is about 5 degrees wide in elevation.

There are automotive radars which scan in 3D[2], but Tesla's is not one of them.

Small radars are rather blunt instruments. You tend to get one point for each target, not lots of points. The beam focus isn't that tight. Tighter focus requires a larger antenna array.

[1] http://www.automotiveworld.com/news-releases/bosch-presents-... [2] http://www.fujitsu-ten.com/business/technicaljournal/pdf/38-...


I believe you, but I think that misses the point. My takeaway here is that Tesla is just getting started with Autopilot.

Despite the hype, this blog post makes it clear that Autopilot is just a very simple camera-based system that requires human monitoring (but is still very useful in my personal experience). Now they are releasing a software update that uses the previously unused radar hardware as well as machine learning to achieve better results. This is exciting because they're going to keep releasing software updates over the air and the software is just going to keep getting better and better.

The fact that the hardware is shitty doesn't concern me. Google's self driving car project depends on a Lidar system that costs $75,000 today. That's almost four times what the average car costs, and it doesn't work in bad weather. Regardless of how good Google's software is, it could mean nothing if Tesla achieves a similar or better result using cheap commodity hardware. The hardware will only get better over time at the same price.

Tesla is in a better position to bring full autonomy to market than anyone else, since they control the hardware and the software, have cars in the field, etc. For this reason I wouldn't be surprised if Tesla becomes the first company to break a market capitalization of $1 trillion. Computers gaining the ability to move around the world with drones and autonomous driving will have an economic impact bigger than the introduction of the internet. Where we are now is meaningless. What matters is that Musk has stated the goal and we have something in the field today that can be updated iteratively over time until it's perfect (which will be never). We may not be very far along this journey, but we've taken those hardest first steps towards the next big technological revolution that will once again change everything about how humans live, the implications of which we can't even begin to imagine.


Volvo is in a much better position. They're going live with 100 human drivers next year, actual customers, not employees, with drivers not required to have their hands on the wheel.[1] This will be enabled only for well-mapped roads in Gothenburg, but they're not all freeways. Volvo's CEO has publicly taken the position that if there's a crash in auto mode, it's Volvo's fault.

Volvo uses four cameras (one of which is a trifocal 3D unit), four radars, one LIDAR, and 12 ultrasonic sensors. That's a reasonable sensor suite for this. Tesla has one camera, one radar, and some ultrasonic sensors. Not enough.

Volvo is also way ahead on self-driving car commercials.[2]

[1] http://www.volvocars.com/intl/about/our-innovation-brands/in... [2] https://www.youtube.com/watch?v=bJwKuWz_lkE


How would that be a better position than Tesla's? 100 vs 100,000.


The 100 sample is supposed to be true automated driving: take me from A to B. Not just the standard lane keeping, adaptive cruise, etc., which is what Tesla offers with their "autopilot".


Tesla offers more than that:

https://www.google.dk/amp/s/amp.theguardian.com/technology/2...

And better yet: their 100,000 (soon to be 500,000) autopilots will gather live information from roads in several countries, across weather types, etc., even when the cars are manually driven. Every day, Tesla's 100,000 cars will bring in as much data for their algorithms as Volvo's 100 cars do in 3 years.

Or put another way: unless Volvo magically has much better programmers than the IT company Tesla, they would need 3 millennia to keep up with Tesla's current fleet of 100,000 data collectors.
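Back-of-envelope, with the numbers from this thread (and the crude assumption that every fleet car logs similar mileage):

    # fleet data-collection ratio, using the thread's numbers
    tesla_cars = 100_000
    volvo_cars = 100
    ratio = tesla_cars / volvo_cars           # 1,000x more data collectors
    volvo_years_per_tesla_day = ratio / 365   # ~2.7 Volvo-years per Tesla day
    volvo_years_per_tesla_year = ratio        # one Tesla year ~ a Volvo millennium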


Those 100 will be the ones using an actual finished product. I am pretty sure thousands of their currently sold cars are already collecting information from their radars.


I hope Volvo's autopilot is not a finished product. The whole point of ML is to keep improving the product from what you learn.

Also, do you have any source for Volvo's current system collecting data and sending it to Volvo? We know Tesla does it but I did not know Volvo does it.


Volvo's autopilot uses a cloud architecture for traffic analysis (based on other Volvos around you). Volvo has predictive, ML-enabled analytics which should (I don't work for Volvo) work similarly to Tesla's Fleet Learning.


That's great, except Google has already done this.


Google have yet to let the public loose in their cars on their own.


4 cameras, 4 radars, LIDAR, 12 ultrasonics???

That is silly overkill, and most of it is incompatible with visual cues.

I am human. I have only 2 cameras. I kick ass compared to these systems.

I can handle snow hiding the lines and edges of the road. I can handle snowbanks changing the shape of the road. I can handle cops waving and pointing. I can handle dirt roads. I can handle unmarked parking on grass. I can handle broken traffic lights, both individual and power outage. I can handle a total GPS outage. I can handle areas without cell coverage. I can handle new roads. I can handle traffic being diverted onto the opposite side of the road, or even off the road, as happens for construction. I can handle gaping unmarked sinkholes.


I hope Volvo's auto mode doesn't cheat by disengaging at the last moment.


What you describe sounds like a "throw a lot of hardware at it" approach. What often matters in the real world is how you integrate and use the hardware, not how powerful the hardware is.


This also isn't really fair as even if Tesla had superior "radar" and "convolutional neural nets" or even "hierarchical softmax", Volvo has a commercial with Jean-Claude Van Damme[0] and Tesla doesn't. Q.E.D.

[0]: https://www.youtube.com/watch?v=M7FIvfx5J10


Nitpicking, but Volvo Car Corporation, which we are talking about here, does not have Jean-Claude Van Damme in their commercials. JCVD stars in the commercials of the other Volvo company: AB Volvo, which is everything but the cars (trucks, buses, Penta motors, etc.).


TIL that Volvo Cars is owned by a Chinese company and is separate from AB Volvo.

> Volvo Cars was owned by AB Volvo until 1999, when it was acquired by the Ford Motor Company as part of its Premier Automotive Group. Geely Holding Group then acquired Volvo Cars from Ford in 2010.[0]

> Geely (officially Zhejiang Geely Holding Group Co., Ltd) is a Chinese multinational automotive manufacturing company headquartered in Hangzhou, Zhejiang. Its principal products are automobiles, taxis, motorcycles, engines, and transmissions. It sells passenger cars under the Geely and Volvo brands and taxis under the London Taxi brand.[1]

[0]: https://en.wikipedia.org/wiki/Volvo_Cars [1]: https://en.wikipedia.org/wiki/Geely


Volvo and others can say whatever they like; until they bring something to market I will assume they are freaking out about how they will slowly be made irrelevant.


Volvo is bringing it to market. It all started with the XC90 (up to 30 km/h), and now with the S90 and V90 it is going to work up to 130 km/h, which is perfect for highways. Other than that, they've recently started a collaboration with Uber [1][2]. I tried the S90 pilot assist myself and it works really well; it does require you to be more "present" on the wheel than what Tesla requires nowadays, but it's surely going to be improved via OTA updates.

[1] : https://www.media.volvocars.com/global/en-gb/media/pressrele... [2] : http://www.bloomberg.com/news/features/2016-08-18/uber-s-fir...


The companies that are being made irrelevant are usually "stubborn" and have a strong belief that their current business model and products will prevail. Kodak and Nokia refused to adopt modern technology until it was too late. I don't think developing autonomous cars counts as sticking with the status quo. They may be behind Google or another company, but they are not wasting time twiddling their thumbs.


I'm not sure that applies to Kodak or Nokia. They saw the changes coming: Kodak launched digital cameras, Nokia tried launching MeeGo. The trouble is they didn't have a competitive advantage over all the other digital camera companies or phone OSes.

By the way, I don't think Volvo is freaking out so much as going for its Vision 2020 thing:

"in 2008 we set out our vision that by 2020 nobody should be seriously injured or killed in a new Volvo car."


I have to agree, because as much as I want to believe they will succeed in this regard, until they release something and we see how it really performs ourselves, it is just media hype.


You do realize that in Europe the same thing that Tesla calls "autopilot" has existed for years in Mercedes, Volvo and some other premium cars? Tesla's autopilot is at the moment nothing more than lane keeping and adaptive cruise control, which has been in use in Europe for at least 3 years now.


Uh, no. The utter crap assistants used in these cars are incomparable to Tesla’s Autopilot even now. Polish and ease of use matter. Execution matters.

For a representative example, see how bad Mercedes’ latest and greatest E-Class w/ Drive Pilot is: http://www.autofil.no/936897/hands-off

It’s damning. And that is the one they advertised as being “self-driving”. That’s the best Mercedes has.

This review compares more of them: http://www.caranddriver.com/features/semi-autonomous-cars-co...

The best of the non-Teslas have at least twice as many autonomous-mode disengagements (i.e. they suck at the lane keeping job). They specifically call out the Model S for being the only one that can keep in a lane without wobbling like a drunk driver. None of the "old" car manufacturers can even get that right, let alone anything more.

So yes, on paper all these components exist for 3+ years and are nothing new. Exactly like smartphones existed before the iPhone did — remember those?


I'm sorry, but you are mistaken. I regularly drive a full fledged E-Class Mercedes Benz from 2015 (company car). It offers nothing even close to Tesla's autopilot.


Yeah, they're the same thing. You can totally leave those cars to drive around on their own the same way Tesla's do.


They're the same thing, and require the same level of attention to use safely. The only difference is that Tesla didn't enforce that level of attention because they're more willing to feed their drivers to the undersides of trucks at 70 mph and then blame the driver for not paying attention than Mercedes or Volvo are, and the press and HN are more willing to let them get away with it.


Do Teslas now drive around on their own? I seem to have missed something.


You haven't missed anything, they use the same system that's available in Mercedes and other cars. Some people just misread the announcement and we've been living with the misconception that Tesla has self-driving cars ever since.

I don't know why, either, since this would presumably be very easily debunked.


> Google's self driving car project depends on a Lidar system that costs $75,000 today.

I believe you're multiple orders of magnitude off here. Google's LIDAR is surprisingly cheap. I believe they used expensive Velodyne units in earlier versions.

Heck, even brand-new Velodyne devices are only $8k.


That's not multiple orders of magnitude; it's, at most, one.


IIRC Google has an in-house unit that is considerably less expensive, at least in the sense that they project it would cost considerably less to manufacture in volume.


I'm not sure of the specifics, but I think it's fancier than the $8,000 unit you're referring to. The VLP-16 only has 16 vertical channels. The next one up has 32 vertical channels and costs closer to $30k. The biggest one, with 64 channels, seems to be around $70k.


Tesla definitely used the radar for their autopilot before the update, now they just increased its responsibility. The autopilot still needs the camera, at least for lane detection.

They won't be able to achieve similar or better results than Google with the current sensor suite, for example they don't have anything to check for oncoming cars behind the vehicle. No software update can compensate for things that the sensors don't see.


>"for example they don't have anything to check for oncoming cars behind the vehicle"

Then how does the lane changing feature work? I am pretty sure you need to see cars behind you to safely change lanes.


As far as I know, the driver has to make sure that it's safe to change lanes. Other manufacturers have rear facing radars for this.


> "As far as I know, the driver has to make sure that it's safe to change lanes."

For safety reasons the driver still has to make sure that no one is approaching at the moment of changing lanes, but the system obviously works autonomously; you just have to engage the turn signal.

> "Other manufacturers have rear facing radars for this."

Tesla has rear radars; otherwise the auto lane change and Autopark features wouldn't be available to use.


Parking is done with ultrasonic sensors. I am pretty sure that Tesla currently does not have rear radars.


Autopilot will get better and it will be used more, and more people will die. At least until full autonomy. Think about all the weird edge cases that are encountered in distributed systems, and now attach a human life and a 5,000 lb car to them. The question is whether fewer people will die on a per-mile adjusted basis.

Sure, Autopilot is safe 'when used correctly,' but I wouldn't trust someone to maintain 100% attention on driving with it enabled. Maybe for a few cumulative hours. Not for hundreds or maybe thousands of hours. If I'm paying attention and I prevent Autopilot from getting me into an accident, why have it on in the first place? It's supposed to protect me from inattention! (e.g. automatic braking)

As it gets better at protecting people from inattention, people will be less attentive, and more will encounter the edge cases that the machine learning models will inevitably have.

Don't get me wrong, I love SDCs and the massive impact they will have, I just believe that partial autonomy is unsafe because of the human factor.


> If I'm paying attention and I prevent Autopilot from getting me into an accident, why have it on in the first place?

Because it takes less mental effort to watch for the exception case rather than constantly adjusting speed/heading/etc.

Also, something that's rarely mentioned is that in its current form AP's killer feature is rush hour traffic. In those situations you're dealing with speeds under 20 mph, which tend to be the most mentally taxing.

In my small sample set (~30k AP miles) I've found the mental load for driving long distances and in rush hour traffic to be significantly less.


How do you know you watched for the exception well enough to react fast enough to prevent disaster when it happened?

At ~30k miles, chances are you haven't seen enough exceptions to make a judgment on that.

It is hard to stay focused on jobs that are extremely dull, except for the rare cases where they aren't.

There's tons of research on this because there are many people in that situation (watching radar screens for incoming ICBMs, guarding a facility, luggage screening at an airport, etc.).


Oh, I saw plenty of stuff in those miles where I had to react.

Flying tires, wildlife jumping out. Heck, just yesterday had someone merge into the lane with no signal. Just because I'm not providing steering inputs doesn't mean that I'm not constantly scanning traffic.


This makes intuitive sense but is not borne out by research such as [1]. The < 20 mph scenario, one of traffic, consists mostly of stress and the general unpredictability and frustration of the whole scenario.

The problem of Level 3 autonomy is not cognitive load but the general lack of active feedback on the one hand, and a lack of surprising events to mediate the allocation of attention on the other. It makes sense: the longer you're encountering non-events, the more difficult it will be to justify alertness. Surprise guides attention, which in turn is strongly correlated with alertness. The longer one must remain alert, especially without surprise, feedback or reward, the higher the levels of subjective effort and mental fatigue.

Intuitively, if you imagine that only a few things are filtered up to attention: the fewer the surprises and the greater the predictability, the more difficult it is to select what to attend to, and the more likely your mind is to begin to wander. This is one explanation for the finding in [1]:

   The fact that the number of false alarms increased suggests 
   that the rather dramatic increase in missed targets was not 
   due to a simple reduction in the number of responses: 
   the number of responses to nontargets even increased. 
   This suggests that the observed deterioration of performance 
   is not caused by task disengagement but may result from increasing 
   difficulties for subjects to correctly identify targets
This is also related to the very complicated question of maintaining concentration over extended periods of time (you can play a complex but engaging game for longer than you can perform a dull monotonous task) and to the explanatory failure of resource-depletion accounts. There is as yet no completely satisfactory account of why perceived fatigue occurs, but all the best models have attention, motivation and reward in common.

[1] In the linked article they specifically call out driving as an activity sensitive to mental fatigue http://www.sciencedirect.com/science/article/pii/S0926641005...


> Because it takes less mental effort to watch for the exception case rather than constantly adjusting speed/heading/etc.

Yes, this!

The argument that drivers are safer if they are forced to pay attention all the time is ridiculous, you can use that to argue against every feature on the car! Servo steering is bad, you don't feel the road properly! Automatic gearboxes are bad, you're not in tune with the engine! ABS brakes are bad, you should know how to brake so as not to lock the wheels! Traction control is bad, you should never drive on surfaces where you're not in 100% control of the car! Blind spot radars are bad, you should always look around you and make lane changes responsibly! Bla bla bla.

The current Level 2 autonomous systems make it easier to do the right thing when driving. Used correctly, they lessen the cognitive load, but you have to learn how they work and what they can and cannot do, just like you need to know what situations your traction control system or anti-lock brakes can save you from. They're not magic; they're just one more helper system.


The difference is in human-enhancing features (ABS, traction control, gearboxes) vs. human-replacing features (collision avoidance, routing, situational awareness).

The best human drivers can get by without the first set of features.


That's a completely arbitrary distinction; you are essentially saying that stuff that was in cars when you learned to drive is all natural and perfectly fine, but stuff that got introduced afterwards is suspicious and bad and you don't like it.

How is routing not human-enhancing? It makes it easier for you to choose which route to take. How is an automatic gearbox not human-replacing? It eliminates the need for a human to shift with the stick.


Maybe a better phrasing would be "features that let you focus on the road more" (e.g automatic shifting, ABS) vs. "features that let you not need to focus on the road" (e.g. lane assist, automatic braking, radar cruise control)


> In my small sample set(~30k AP miles) I've found the mental load for driving long distances and rush hour traffic to be significantly less.

Interesting---how did you measure this?


Had about ~7k on the odo when we got the AP update, 37k now. Large majority of my miles are 300mi roundtrip weekly commute.


Oops, my question was ambiguous—I was actually wondering how you measured "mental load", as you put it.


Ah, misunderstood :).

I've been doing the same ~7hr drive + workday for 2 years. Both my wife and I noticed I was a lot less wiped out after AP. We had the car ~3 months before AP, so it was a pretty controlled change.


> If I'm paying attention and I prevent Autopilot from getting me into an accident, why have it on in the first place?

The same reason you have cruise control - it doesn't do everything for you, but it helps reduce cognitive load in some basic areas, which reduces fatigue over the long haul.


> The question is if fewer people will die on a per-mile adjusted basis.

Last I heard, Autopilot was MUCH safer per mile.


Not the numbers I've seen. Estimates range from about the same as humans, if you compare raw miles driven, to considerably less safe if you just compare against highway miles, which is where Autopilot is used. There's also the issue that, with only one fatality, the error in estimating the rate is quite high.


Will more people die than would have with AutoPilot?


If you quintuple the number of Tesla kilometres per year but only triple its safety, you will still have more injuries and fatalities: 5x the kilometres at a third of the per-kilometre fatality rate is still 5/3, about 1.7x, the deaths.

For every Tesla that is now able to recognise a truck cutting across a highway and brakes in time to avoid a fatal collision you will have new bizarre scenarios like following behind a truck that dislodges overhead objects, which a human would have avoided but a Tesla's computer didn't see, leading to a human getting crushed by a bridge falling down.

I would also expect a short learning curve for Teslas to acknowledge the emergency stop water curtain signs.

But the short version is yes: SDCs will result in more deaths even as they get relatively safer and safer.

https://youtu.be/NoTMC-uxJoo


Let me ask a different question. For every accident that happens between an AutoPilot and a non-AutoPilot car, would it have happened if every car involved had AutoPilot?


A commenter on a related TechCrunch article suggested they're using Delphi: http://www.delphi.com/docs/default-source/old-delphi-files/7... in the 2014 and onwards models. Unsure if those are built on the Bosch radars you mentioned.


A few months ago Elon posted on Twitter thanking Bosch, their supplier, for helping them enhance their sensors. In this blog post they say that they got an updated driver/firmware.


> "The update will also penalize inattentive drivers. If the car determines that the driver doesn't have their hands on the wheel and throws its audible warning three times in an hour, it will lock the driver out of the feature. In order to re-enable Autopilot, the car will have to be pulled over and put in park."

https://www.engadget.com/2016/09/11/tesla-s-next-autopilot-u...

If Engadget got that right I think we will see a lot of upset Tesla owners in a few weeks.


There will be more upset Tesla owners, but fewer tuned-out ones. Tesla needed to do this.


Tesla punishing users not following the rules > Regulators punishing Tesla.


I don't think it will upset very many owners. In normal use this change will never be apparent, only when you're really screwing up.


> If Engadget got that right I think we will see a lot of upset Tesla owners in a few weeks.

Probably also a small handful fewer dead ones over the next few years.


Reminder that proponents of autopilot systems make the utilitarian argument that if self-driving cars are even slightly better at keeping people alive behind the wheel than human drivers, then it is worthwhile to keep them on the road. Tesla's implementation didn't have the safety features that other manufacturers had in their assisted cruise control systems, and people died. A few disgruntled users beats needless deaths caused by a false sense of confidence in Tesla's autopilot.


> Tesla's implementation didn't have these safety features that other manufacturers had in their assisted cruise control systems and people died.

ONE person died. Over many million miles of Autopilot driving. Last I heard it was much safer to drive with Autopilot than without. Yep, per Musk:

> “Indeed, if anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available. Please, take 5 mins and do the bloody math before you write an article that misleads the public.”


> Over many million miles of Autopilot driving. Last I heard it was much safer to drive with Autopilot than without.

There are luxury car models with a similar amount of purchases and miles driven that have had 0 casualties.


As a Tesla owner I always keep a hand loosely on the wheel and I never get a warning. In fact, the only time I've seen the warning was when I deliberately kept my hands off to get a feel for when the warning occurs and what it looks and sounds like. I still haven't been brave enough to let the car trigger the fail-safe "come to a stop" scenario by ignoring multiple warnings; I also wonder if doing so might cause a black mark against my Tesla account ;)


I sense a new application for Korean sausages!

https://www.engadget.com/2010/02/11/south-korean-iphone-user...


Tesla needs to show the regulators that this technology is still driver assist. They need to prove that they are making a reasonable effort to remind the driver to pay attention.


The problem is that the marketing isn't that it's driver assist, it's that it is self-driving.


And that's the task their marketing department is facing: to correctly align product features and positioning so they can develop and sell (gradually) full self-driving while staying on well-developed legal ground (driver assist) until the law catches up.


I keep my hands off the wheel all the time and it hardly ever gives me the warning. And the blog post says that it only stops you if you ignore the warnings, i.e. presumably don't put your hands on the wheel. In reality it is pretty nice to relax in the car and put your hands down while still paying enough attention that you can instantly grab the wheel if necessary. But yes, it does worry me that this might be annoying.


The entire point of having autopilot is not to have your hands on the wheel! I really don't think it would serve any purpose other than legal caveats. Even if your hands are on the wheel, it takes much longer to react if something bad happens. I hope this is just because it's early days of self-driving, so we can collect some test data and improve the system to the point that it is actually self-driving.


Is it just me or does whitelisting static objects to determine whether the car will collide with them seem like a bit of a crude hack? It almost sounds like the system will brake at newly installed traffic signs.

edit: Upon closer reading, he explains it somewhat. Once they have enough data, the system will start braking on unknown objects with gradually increasing force as the confidence level rises. So basically, it will brake on unknown traffic signs but only slightly, as the confidence level shouldn't get too high, if I understand that correctly.

The last paragraph sounds technically challenging and interesting:

"Taking this one step further, a Tesla will also be able to bounce the radar signal under a vehicle in front - using the radar pulse signature and photon time of flight to distinguish the signal - and still brake even when trailing a car that is opaque to both vision and radar. The car in front might hit the UFO in dense fog, but the Tesla will not."

edit: it seems like they are already doing it beginning with this update: "Now controls for two cars ahead using radar echo, improving cut-out response and reaction time to otherwise-invisible heavy braking events". that sounds awesome.
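If I'm reading the post right, the whitelist-plus-confidence logic amounts to something like this sketch (the names and thresholds are my guesses, not anything Tesla has published):

    # rough sketch: confidence-scaled braking with a geocoded whitelist
    WHITELIST = set()   # geocoded radar signatures of known-safe overhead objects

    def brake_force(radar_signature, collision_confidence):
        """Return brake force in [0, 1] for a detected stationary object."""
        if radar_signature in WHITELIST:
            return 0.0      # fleet data says cars pass this safely: ignore it
        if collision_confidence < 0.3:
            return 0.0      # too uncertain: avoid phantom braking
        # ramp braking force with confidence instead of all-or-nothing
        return min(1.0, (collision_confidence - 0.3) / 0.7)

    def record_safe_pass(radar_signature, safe_passes):
        """Whitelist an object once several cars have driven safely past it."""
        if safe_passes >= 3:    # "several", per the post; 3 is my guess
            WHITELIST.add(radar_signature)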


>It almost sounds like the system will brake at newly installed traffic signs.

One of the only situations where both "brake" and "break" have the same meaning in a sentence.


Or, worse yet, it will crash into an actual obstacle sitting where a whitelisted sign is.


Self-driving cars are entirely reliant upon mapping data for all sorts of functionality. For example, how is it supposed to differentiate between curved roads and on/off ramps? Decent navigational maps require huge amounts of manual intervention, which is why Apple's mapping software initially sucked. The maps for self-driving cars will require an order-of-magnitude more data.


Yes, but Apple and Google only have what their company cars bring in for road data. Tesla makes it clear every customer is also acquiring data to help all their cars make better decisions:

> Curve speed adaptation now uses fleet-learned roadway curvature

So they have an advantage over Google and Apple in that they have mass market cars full of sensors being driven for them for free.


I guess so... Google's mapping cars have expensive sensors that won't make it into production cars. And if they partner with a single auto manufacturer or buy up Lyft, they will have just as much data as Tesla has.


Apple and Google have billions of smartphones collecting location data. Not all of the data are currently sent back, but they can certainly do it with just a software update. For example on my iPhone under location services it already lists "routing & traffic" as one of the system services using location. It's easy enough to extend it to mapping too.


Unfortunately that's only location data. What cars with more sensors can send is closer to: this location has N lanes, turning radius is M, the slope is X, I'm travelling at Y mph, I see connected road at location Z, I see the following signs...

What your phone can send is only "I'm more or less at X, I'm moving fast so we may be in a car. But I may as well be in a glider. Who knows..."


Yes, this seems really complex to me, too. I'm also not quite sure why exactly it is so complex to distinguish between objects (including vehicles) on the road and ones above/next to it. If they have radar images (I imagine them as images with depth information, which might be fundamentally wrong), they should be able to tell both where the road is going and, with that information, which of these objects are relevant.

But for the learning part, they probably combine the camera that has been their primary device to date with the radar (at least in daylight scenarios) to identify objects. They may even be able to learn about special material properties, like the reflective coating of a traffic sign.


> I'm also not quite sure on why exactly it is so complex to distinguish between objects (including vehicles) on the road and ones above/next to it.

Because radar does not have the same resolution as LIDAR.

EDIT: Phased array radar and cheap stationary LIDAR should get under $100 in ~5 years, at which point this whole argument will be moot. Hacks in the meantime!


What does the world look like in radar? Are there any visualizations available?

Could multiple radar emitters and receivers be used to create a phased array and improve resolution?


Here is the resolution calculation for aircraft radar (with a respective example of a signature, a.k.a. an image):

http://www.radartutorial.eu/01.basics/Range%20Resolution.en....

For angular resolution: http://www.radartutorial.eu/01.basics/Angular%20Resolution.e...
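The headline formulas from those pages, plugged with illustrative numbers of my own (a 1 microsecond pulse and a 5 degree beam; neither is taken from the linked examples):

    # classic pulse-radar resolution formulas, illustrative values only
    import math

    c = 3e8                   # speed of light, m/s
    pulse_width = 1e-6        # assume a 1 us unmodulated pulse
    range_resolution = c * pulse_width / 2   # 150 m: closer targets merge

    beamwidth_deg = 5         # assume a 5 degree beam
    target_range = 100        # metres
    # targets at the same range merge if separated across-track by less
    # than about 2 * R * sin(beamwidth / 2)
    cross_range_res = 2 * target_range * math.sin(math.radians(beamwidth_deg) / 2)
    print(range_resolution, round(cross_range_res, 1))   # 150.0 m, ~8.7 m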


> What does the world look like in radar?

Depends on the radar system. Some are distance-only, without direction. Some are 1D (a line, usually horizontal) and some are 2D. Many objects are partially opaque, which is confusing. Resolution is very poor compared to an optical device of the same size.

> Could multiple radar emitters and receivers be used to create a phased array and improve resolution?

Yes. However, this is currently bulky and expensive (in dollars and in compute power). Thankfully, it looks like capitalism is coming to the rescue here and miniaturizing the everloving shit out of complex radar arrays for human interface tech. This should be usable for vehicles as well.
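To put rough numbers on why a bigger array focuses tighter (standard uniform linear array approximation; the figures are mine, not any product's):

    # beamwidth of a uniform linear array narrows as the array grows
    import math

    freq = 77e9              # common automotive radar band
    lam = 3e8 / freq         # wavelength, ~3.9 mm
    d = lam / 2              # half-wavelength element spacing
    for n_elements in (4, 8, 16):
        # standard approximation: theta_3dB ~ 0.886 * lambda / (N * d)
        bw = math.degrees(0.886 * lam / (n_elements * d))
        print(f"{n_elements} elements -> ~{bw:.0f} degree beam")
        # 4 -> ~25, 8 -> ~13, 16 -> ~6 degrees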


All the automotive radars are phased array devices, and have been since the Eaton VORAD of the late 1990s. No moving parts. They're usually 1D scan (horizontal) only, although 2-axis scanned automotive devices exist.


If you're cresting a hill, rounding a curve, or passing under a bridge, static objects in the visual field are translating in front of the vehicle and are not easy to distinguish from slow-moving vehicles. With simple sensors, there isn't enough information to decide whether these blips are on a collision course or not.

The Florida situation was the easiest set of circumstances for machine vision to handle and it still failed. More complex, dense road systems with real terrain are much harder to handle. This radar system will still get someone killed.


The driver is still the final authority. Pay attention and in most cases, if you die, it won't be your fault.


Is LiDAR worthwhile despite it detecting rain and snow? Can those returns be ignored in software?


> I'm also not quite sure on why exactly it is so complex to distinguish between objects (including vehicles) on the road and ones above/next to it.

In the case they're talking about here, it's because to do so you need to predict where the non-visible road surface is going to be.

Consider travelling up a continuous, slight incline. Precisely at the crest of the incline is an overhead gantry sign, positioned such that for an observer travelling up the incline it is located directly in line with their current direction of travel.

The only way that you "know" that the road actually dips under the sign rather than the sign being on the road surface is experience.


You can also see other cars "disappearing" under the sign. And the sign going up a bit as you move closer.


That's the kind of higher level reasoning that's easy for people and hard for machines.


  Upon closer reading, he explains it somewhat.
The author of the article is a nameless collective, not an individual. They explain it, but there is no "he" to ascribe the explanation to. We do not know the names of the authors.


I got this from Twitter, where Elon Musk writes stuff like "Writing post now with details." and "Will get back to Autopilot update blog tomorrow." So I assumed he'd authored the thing.


Annoying that you are being downvoted. You are entirely correct, and it is worthwhile to fight against the nonsensical notion that Elon === Tesla.


For what it's worth, there is some strong evidence that Elon penned this post himself - not that it is clear from the post itself.

https://twitter.com/elonmusk/status/771446048262946816

> Finishing Autopilot blog postponed to end of weekend

https://twitter.com/elonmusk/status/774155658476212224

> Will get back to Autopilot update blog tomorrow.

https://twitter.com/elonmusk/status/774664927835553792

> Will do some press Q&A on Autopilot post at 11am PDT tmrw and then publish at noon. Sorry about delay. Unusually difficult couple of weeks.


I was already aware of these tweets, but that doesn’t mean he wrote it all himself, just that he is involved in it. The fact is, the blog post has a byline and the byline does not mention Elon Musk or any individual person's name.


Yup. As soon as you're white/blacklisting you've lost at AI because you're assuming the rules never change, and if that's the case then why use a learning AI in the first place.

I wonder how many fatalities it'll take before they realise that 'the driver should've been paying attention' isn't a good enough excuse.


Incremental steps. They will move past whitelisting in time.

Note that humans use whitelisting too, to the point that they do not notice when a stop sign has been defaced or converted to a "give way".


I think you have it backwards. The radar will not initiate braking events for newly discovered stationary objects. Once it has human feedback from multiple sources, the object will be whitelisted and cause braking events.

Blacklisting would cause the behaviour you describe.


The blog post says "When the car is approaching an overhead highway road sign positioned on a rise in the road or a bridge where the road dips underneath, this often looks like a collision course. [...] If several cars drive safely past a given radar object, whether Autopilot is turned on or off, then that object is added to the geocoded whitelist."

So as i read it, the system does think it will collide but ignore it if the whitelist says the object is safe.

The paragraph after that says basically once they have enough data, the system will start braking on unknown objects with gradually increasing force as the confidence level rises. So basically, it will brake on unknown traffic signs but only slightly, as the confidence level shouldn't get too high.


Basic problem: the system doesn't know where the road surface is. Google's vehicles, and most of the DARPA Grand Challenge off-road vehicles, profiled terrain with a LIDAR. They know where the road surface is. This is essential to off-road operation, but on-road, optimistic assumptions (no big potholes, road not leading off cliff) can be used, at some risk.


I read this as saying: we start braking lightly when detecting an unknown object, applying more braking force as it becomes clearer that this object is a collision risk.

Once enough human or autopilot trips detect the same signal with no collision, whitelist the signal and cease braking-and-monitoring behaviour.

I am probably misinterpreting but that is what makes sense to me.


What about a tractor trailer (like in Florida) in the same spot as a whitelisted traffic sign?


Road signs will be more to the left or right; the truck will be in front.

Corner case: a truck jackknifed across the road immediately under a whitelisted gantry.


I read it the same way as karyon. Whitelisting something means recognising it in a known-good way (in this case, "we know that's an overhead sign"), whereas blacklisting means known-bad.


> The car computer will then silently compare when it would have braked to the driver action and upload that to the Tesla database...*whether Autopilot is turned on or off*, then that object is added to the geocoded whitelist.

(emphasis mine)

This has interesting privacy implications. I am not a Tesla owner, but I imagine that by enabling Autopilot you consent to providing Tesla with diagnostic, error, and sensor data. But what about those who have not enabled this feature? Their Tesla will automatically phone home with data regarding their location and surroundings regardless of whether or not they have consented to this?


I was really fascinated by that comment as well, but not for privacy reasons. I believe that while it would be possible to peruse server logs at Tesla to understand where a particular car was at a particular time, that is no worse than OnStar or current phone GPS tracking.

The interesting thing is the data set of watching humans drive and using models to drive in the same place. This only works if the "place" is not notably different from the model. Say a semi has hit the overhead sign and it's now hanging into the roadway: can the car distinguish between a sign hanging sideways and one that is attached normally?

Severe storms and downed power lines are another interesting question. Does autopilot recognize that the environment has been grossly modified and refuse to drive? Earthquakes, tornadoes, floods: all can grossly change the environment at a particular geocoded location.

What if a Tesla owners' club decides to use a piece of Highway 58 out in Nevada as a race strip? Does autopilot assume that when you hit this point you are supposed to stomp the accelerator and go as fast as you can? (OK, that is a stretch.)

It's the data without the knowledge, something machine learning is bad at (hence chat bots turning into vitriol-spewing fascists). VERY interesting times.


> say a semi has hit the overhead and its now hanging into the roadway, can the car distinguish between a sign hanging sideways and one that is attached normally? Severe storms and down power lines is another interesting question.... Earthquakes, tornadoes, floods, all can grossly change the environment at a particular geocoded location.

Humans are not perfect in those situations either, and cause plenty of fatal crashes.

Remember, autopilot doesn't have to be anywhere near perfect. It only has to be better than humans are now.


To be honest, it really has to be close to perfect; otherwise the public will deem the technology a killer, even if it is in fact safer.


I think that's what we are going to find out. So far, I haven't seen anyone with pitchforks and torches demanding Tesla be stopped after the first autopilot fatality.

I used to think about it the way you do, but lately I think if they can make the numbers work out, then there will be some insurance companies willing to step in and turn risk of lawsuits into a manageable cost of doing business.


You mean like autopilot on commercial airlines?

Or like x-ray machines?

Or like any other machinery that is automated and kills the odd person here and there?

These cases show us it doesn't need to be perfect at all.


> Earthquakes, tornadoes, floods, all can grossly change the environment at a particular geocoded location.

As do other car accidents.


Interesting to think what an autopilot system should do when it is suddenly in the presence of a car accident. I would love to read about frustrations at the NHTSA trying to crash-test cars with autopilots that kept swerving just before they hit the test barrier.


I feel like Tesla is a super creepy company that people aren't giving enough scrutiny to just because their tech is so great. What was that story about them remotely locking out a hacker from snooping around in their software? What about all those articles where they do an uncomfortably accurate play-by-play of someone's accident? I'm not comfortable with the idea of a company having that much access to private data.


I don't quite see how it is "creepy". Tesla's autopilot is not a requirement for operating their vehicle. It is an experimental value-add that you have to agree to in order to be included in the program.

As with any beta testing of software, high-fidelity diagnostic data is a must, especially in these circumstances, as you have no other way of simulating the various scenarios people find themselves in when driving.


I could be wrong, but it seems they phone home for all sorts of things even without autopilot enabled.


Software that can end lives should not be beta tested on customers. Full stop.

In no way is it acceptable to give customers "experimental" software, overhype it, and disclaim responsibility when it inevitably fails.


...which gets you Toyota's brake pedal software, which, in retrospect, doesn't look tested at all. Better? (Deadlier, for sure.)


They log everything whether you opt in or not. But as a cell phone user, I feel like I have opted out of all location privacy already.

I'll trade my location privacy for awesome car capabilities.


Most people don't own a Tesla anyway. It's a little harder to avoid being tracked by Google, Apple, Facebook and Microsoft.


What surprises me is how (at least here on HN) there was a general feeling of annoyance when Apple tracked user locations with Apple Maps to identify traffic patterns, even though Google did the same for Google Maps.

But now with Tesla user-tracking, people seem to be actively psyched at being tracked by Tesla.


People are more comfortable with data collection when they understand what the data will be used for and agree that the use is valuable to society. If the use is unclear or doesn't seem important, then people worry that the real reason is something creepy that's been left unsaid.

In this case, people mostly agree that R&D in self-driving cars is important, and can clearly see how this data helps with that. Whereas identifying traffic in Google Maps/Apple Maps feels less important, and the connection between location tracking and detecting traffic takes a little more work to understand.


If Facebook or another company whose goals may not be clearly noble did it, people might question their motives more. I also believe there is a naturally higher level of trust in people like Elon Musk, who have set more praiseworthy missions for their companies than, let's say, someone like Mark Zuckerberg. I.e., Elon wants humans to have a backup planet and is taking action to save this one; he seems to have dedicated his life to that mission. Mark, maybe not so much.


Musk also isn't on record as calling his customers "dumb fucks," the way Zuckerberg is.


What were you on the record saying when you were 19?


Musk doesn't have to call anyone a dumb fuck, he just points to some graphs. The log data clearly shows that the driver was an idiot.


It's been a fracas with Google too. Take this discussion (the first substantial one I dug up on the topic): https://news.ycombinator.com/item?id=6873032

There's someone making the same consent argument (https://news.ycombinator.com/item?id=6873947 ) and lots of people expressing ambivalence and more about sharing the data with Google.

As far as how people are responding to Tesla doing it, you did reply to a comment questioning their practices without even being exposed to them.


Perceived value is greater. Everyone wants self-driving cars to be easily accessible and without human intervention. I'd give up my data personally for that goal. It's certainly more valuable than having Google direct me to a route which is 2 minutes faster.


Exchanging privacy for direct physical safety is a little easier to swallow I'd imagine.

Assuming of course that new Starbucks don't start showing up on routes preferred by Tesla drivers.


I would assume that this is because it is already kinda-sorta common behavior for software to track this kind of data, while this is an entirely new thing for cars.


It's easier to see that Tesla are going to do something exciting and positive with the data.


This is no different than how people viewed the idea of using your real name online pre- and post-Facebook.

Privacy erodes naturally. It is inevitable. The benefits almost always outweigh the costs. The needs of the many outweigh the needs of the few.

The future is one in which humans are recognized, at least by the machines, as being more like a single organism than a group of individuals.


I would argue the opposite is true: the perfectly anonymous population can have policy and management at only the broadest granularities of abstraction.

Data collection is all about machines being able to treat you as an individual.


This has been discussed extensively in the past; apparently the ownership contract includes consent to the almost constant downloading of all that information.


It's perplexing to me that to own something, you must agree to a contract.


How does this work for second hand sales? The new owner wouldn't be party to the contract.


I thought that Tesla have always been tracking their cars, autopilot or not. IIRC, there was a story from over a year ago where Tesla complained about a car review, citing the data that the reviewer's car had transmitted. This included location information, battery power, etc.

Perhaps there is a way to opt out of this - but also, if you opt out, do Tesla disable features of the car?


I wonder if/how insurance companies will get this data. It is only a temporary concern until we have fully autonomous vehicles, but still a concern.


By installing the autopilot software update, or purchasing the car with the radar installed I'm sure you signed away these rights.


All of the cars since late 2014 have autopilot hardware installed. This allows purchasers to activate autopilot after the sale of the vehicle.


What happens if I cut the leads?


Tesla will accuse you of industrial espionage, apparently.


It never ceases to amaze me how software improvements can greatly expand the capability of given hardware; all this is done on top of the 2014 autopilot hardware. That is what I like about software: there seem to be very few hard limits you cannot work around with a clever new approach.

In the race to the self-driving car, Tesla now has one big advantage: they have tens of thousands of cars with the autopilot hardware driving around every day. This gives them a huge lead in the amount of data about their software performance - just comparing what the radar sees and how the human drives in any situation should make a difference.


As much as I love writing clever software to work around hardware limitations (I believe it's what makes videogame programming of the 80's and 90's fascinating and led to better creativity and better games), I once had the job of writing software for "broken" hardware. By this I mean I was in charge of a computer vision algorithm, and the camera was physically incapable of taking the images necessary for the algorithm to function. It's worth mentioning this was for a self-driving car.

Tesla will never come close to a Level 4 autonomous vehicle with the hardware rigs they're currently selling. That said, it'll be interesting to see what improvements they can make with software. Given their over-promise and under-deliver history though (which arguably killed someone), I'll take their marketing with a grain of salt.


If the hardware does not work in the first place, then there is only so much software can do about it. But here we are talking about a system which works and is being improved.

Tesla is very explicit about the limitations of their current system. So the autopilot accident seems to be mostly about the owner not really understanding what the current autopilot can and cannot do; after all, in the car it is just called "autosteer".

There is also already talk about Autopilot 2.0. This consists of augmented hardware, e.g. 3 different front-facing cameras. Only with that hardware is Tesla trying to reach Level 3 or 4. The 8.0 update is about enhancing the quality of the existing autopilot, not about reaching new levels of automated driving.


I'd be interested to hear how the radar handles other radar signals. Given the use of police radar, radar detectors, and radar-based collision avoidance like what's found in the rear tail lights of some Ford F-150s, and of course on a fair number of Audis, it would seem the environment could get noisy at times. Yes, the Tesla radar could operate at a specific frequency that would minimize interference, but what happens when a bad actor decides to intentionally "blind" that radar? I assume, given this is a life-critical system, that it would have countermeasures, perhaps utilizing a LIDAR or camera-based backup?


I don't know how this particular radar system works, but in general you can modulate radar signals with a special encoding so that only the sender can receive and interpret them. For all others the signal is just noise. At university we worked with m-sequence-based radar systems that have these properties. This would minimize the possibility that someone (accidentally or not) can send signals that you misinterpret. Depending on the remaining possibility you might still want to take some countermeasures.
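For the curious, a toy version of that property (my own sketch, nothing to do with any production system): an m-sequence correlates sharply against itself and looks like noise at every other shift, so a receiver correlating against its own code rejects everything else.

    # toy m-sequence (maximal-length LFSR) and its autocorrelation
    def m_sequence(taps=(7, 6), nbits=7):
        """One period of an m-sequence from a Fibonacci LFSR, as +/-1 chips."""
        state = [1] * nbits
        out = []
        for _ in range(2 ** nbits - 1):
            out.append(state[-1])
            feedback = 0
            for t in taps:              # taps giving a maximal-length sequence
                feedback ^= state[t - 1]
            state = [feedback] + state[:-1]
        return [2 * bit - 1 for bit in out]

    seq = m_sequence()
    n = len(seq)                        # 127 chips
    # circular autocorrelation: n at zero lag, -1 at every other lag
    corr = [sum(seq[i] * seq[(i + k) % n] for i in range(n)) for k in range(n)]
    assert corr[0] == n and all(c == -1 for c in corr[1:])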


I think this is where the industry really needs to start internalizing a few rules. As an example: never trust encoding alone to provide integrity for signals tied to life-critical systems. This is more true than ever, especially since it's possible to buy software-defined radio solutions which work in radar bands:

http://ancortek.com/


Yes, but if you flood it with white noise, it'd screw up any transmit waveforms. The only alternative is to have a software radio that is flexible enough to switch bands when it encounters noise.


In terms of the SDR stuff, I'm thinking more about it being in the hands of a threat actor. One of the things that neither Tesla nor Google has had to contend with to date in their trials is intentional disruption and subversion of their systems, and I only hope they've hired security engineers to consider more than just IT-stack vulnerabilities like those explored by Charlie Miller & Chris Valasek (https://www.youtube.com/watch?v=OobLb1McxnI). A great number of jobs are at risk from this technology, from long-haul truck drivers to bus and cab drivers. This will certainly create tensions which could result in threat actors going after sensing systems. Readily available radar-frequency SDR in those conditions really could be used for harm.


Analogous to shining a flashlight in a driver's eyes or pointing a laser at an airplane cockpit. The law handles it.


That's what I'm afraid of. Rarely is the correct answer to a technological threat vector these days "let the law handle it." In this case it's even worse because of the real world problems that can and do happen when it comes to radio systems combined with the implementation of a life critical system. Look at the Evanston key fob incident as just one example. Would a shop owner with a bad neon sign transformer (or to interfere with radar, a microwave) from China be held liable for accidents on the street?

http://evanstonnow.com/story/public-safety/bill-smith/2016-0...


well, that doesn't help you if you're dead. although it's probably an argument in favor of not requiring such countermeasures, they would still be nice to have, wouldn't they? :)


i think that question is not so pressing until the system is declared fully autonomous as in, no human required anymore to drive. as long as it's just an assistant, the human is supposed to brake. but, since they surely want autopilot to become an actual autopilot...

you could do the same with "normal" cameras (and probably lidar as well) i guess by pointing a laserpointer at it. the safest option is probably to just brake.

here elon musk says they don't use lidar: https://techcrunch.com/2016/09/11/tesla-autopilot-8-0-uses-r...

(as an aside, some years ago we had incidents in the news where people pointed laserpointers at aircraft pilots while landing. the pilot's appropriate response is usually to abort the landing and do a go-around, because that's the safest thing to do in this situation)


I wrestle with this one and am afraid that, at the point a car is driving itself, a EULA the driver agrees to which states "I will always maintain concentration and control of the vehicle in case things go wrong" isn't accounting for how the human brain actually works. Humans only make for good backup systems when they are engaged and/or have time to react. I'll give Elon some credit. This is one heck of an experiment with a life-critical system.


Emergency braking, as with a blown tire.


The fleet of Tesla cars on the road is an advantage that Uber and Lyft also have over Google: they can deploy cars with a lot of sensors on the road AND make money off of it for the most part!

If data is the differentiating factor in this game, Google has less of it! Which is an interesting position for Google to be in!


> If data is the differentiating factor in this game, Google has less of it!

If you're talking about Uber and Lyft, I think Google currently has them beat in terms of data on a global scale, given how long they've been collecting data for Maps. You're talking about a future scenario where Uber and Lyft have rolled out significant numbers of sensor-laden cars, but right now they don't have that. And who's to say Google won't have another approach by then?


Google's map data is in no way comparable to the data collected by Tesla's fleet (an expert trained by vehicle owners), which by the way is gathering a million miles of experience every 10 hours.


True, but Google's Street View cars have 360 degree cameras and LIDAR systems on them and I assume the data is saved at full fidelity and sent back to Google.

Teslas definitely have many more miles on them, but I don't think the cars are sending back every single frame captured by its cameras back to Tesla HQ.


Privacy issues aside, the data volume would be far too high for the car's mobile connection to transfer. But if they only send data whenever actuality and expectation differ - e.g. the car's GPS track deviates locally from the map data, or the driver disagrees with the autopilot - they can get a lot of information.
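A minimal sketch of that kind of deviation-triggered reporting (the threshold and field names here are hypothetical, just to make the idea concrete):

    # Upload telemetry only when reality disagrees with expectation.
    GPS_DEVIATION_LIMIT_M = 15.0  # assumed tolerance vs. map data

    def should_upload(gps_error_m: float, driver_overrode: bool) -> bool:
        return driver_overrode or gps_error_m > GPS_DEVIATION_LIMIT_M

    print(should_upload(gps_error_m=20.0, driver_overrode=False))  # True
    print(should_upload(gps_error_m=2.0, driver_overrode=False))   # False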

With this new usage of the radar, they seem to create "radar maps" of all radar echoes from bridges and traffic signs. Using those maps, they should be able to detect true obstacles with high enough confidence to enable automatic braking/evasion. The problem so far with automatic radar braking is not so much detecting possible obstacles as avoiding false positives: a car must not randomly brake when there is no obstacle.
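Conceptually, the whitelist lookup could be as simple as this sketch (the geohash cells and field names are my assumptions, not Tesla's format):

    from dataclasses import dataclass

    @dataclass
    class RadarTarget:
        lat: float
        lon: float
        overhead: bool  # echo geometry suggests it sits above the roadway

    # Fleet-learned cells known to contain a harmless overhead reflector
    # (bridge, gantry, road sign). Purely illustrative.
    whitelist = {(37.423, -122.084)}

    def cell(t: RadarTarget) -> tuple:
        return (round(t.lat, 3), round(t.lon, 3))

    def should_brake(t: RadarTarget) -> bool:
        if t.overhead and cell(t) in whitelist:
            return False  # known bridge/sign echo: ignore it
        return True       # unexpected strong return in our path: brake

    print(should_brake(RadarTarget(37.423, -122.084, overhead=True)))  # False
    print(should_brake(RadarTarget(37.500, -122.100, overhead=False)))  # True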


I think both sides have their pros and cons. Google has high-resolution data, but it's outdated, whereas Tesla's data is lower resolution but always being updated. Tesla has the edge here since they will know about road construction and such well before Google does.

I'm sure this will flip one day if Google increases their map fleet.


They only have to send the interesting ones.


Which is a bit circular - a lot of things are easy if you can detect "interesting", but like with pornography, this is one of those "I'll know it when I see it" AI-hard, human-easy tasks.


...but Google's data collection specifically covers whole areas. Tesla might be gathering millions of miles, but they could all be the same miles on the same road, like someone's daily commute.


Android phones in moving cars?


Hasn't Google Maps been collecting driving data from users of their app for years now? https://googleblog.blogspot.jp/2009/08/bright-side-of-sittin...


Google has Waze, which I presume sends high-g events like sudden braking to a central collector somewhere.

Waze is how their maps product knows where the traffic is on surface streets.


Most map products get traffic data from the same external vendors (like TomTom). I think those vendors get it from dedicated sensors, like counting cars on street cameras.

Obviously you see Waze info on Waze itself, but it has to work without any other users nearby. (Btw, I don't think Waze and Google really share all their data; Waze doesn't even have lane routing.)


There seems to be a mix of both - Waze-collected data seems to be fed back into Waze, but there seem to be other sources as well.


And how many rounds of additional fundraising are required for Lyft or Uber to buy enough sensors to retrofit 1% of their vehicles? What happens if drivers decide to sell those sensors?


I think that the loss of the population of Uber and Lyft drivers would be a severe problem, assuming that Uber and Lyft drivers form a core advocacy group.

"Developers" ( in the sense of that Ballmer clip ) and sysadmins created a critical mass that carried Microsoft forward.


This post gives an interesting perspective on how important data is going to be.

One thing that Tesla has is human input data. Google doesn't have that and never will, I believe.


How are Uber and Lyft related to Tesla?


They also invest in self-driving cars, and have big fleets they can potentially get data from.


They have big fleets? Where? For the most part, the Uber cars are not owned by Uber and certainly not equipped with any sensors. I know they have some tech under development, but that's not the same as a massive fleet on the road.


It's quite strange to see how ... flexible the requirements-level changes are. As someone steeped in safety- and human-life-critical software development, I find this very odd.

This is especially true of "Interface alerts are much more prominent, including flashing white border on instrument panel." This has been a huge thing in aviation automation for like ... forever.


The speed with which these changes are rolled out, especially considering their scale, is also … impressive for work done under design controls.


If they are doing this in the sort of controlled fashion I'd consider necessary, then this needs to be considered a public good and studied well.


Completely agree---it could also make for a very lucrative consulting business.


I'm keeping my tone positive, but behind it is skepticism that they've solved this problem - because it's an extremely profound problem and it's resisted solution for as long as there have been humans building dangerous things.


We are on the same page, I think; the skepticism is shared. I'm in medical, not automotive, but much of our PLM process is shaped by what was learned in the automotive and aerospace worlds. It boggles my mind to imagine how Tesla can get all of the responsible parties to even sign off on the documentation alone at the speed they work. I am quite serious (and I think correct) that if Tesla's process is as efficient as it appears, they are sitting on a gold mine for consulting, considering all the organizations in regulated industries that are incapable of moving as fast. Musk wants to change the world? Sharing this secret would count.

Another question that popped into my head: what did the requirements look like that left such a huge (I assume) CPU and memory budget available that they could improve the system to handle "six times as many radar objects with the same hardware with a lot more information per object"?


"Autopilot" is "experimental" so whenever there's a problem, they blame it on the driver.

I wonder if they have heard of Six Sigma.


I have, and Six Sigma has only a weak and thready influence on software as I've seen it. And I have looked very hard.

Six Sigma has been, in my view (appeal to observer bias), dominated by supply-chain activities. There is too much delay variation in software for it to be of much use.

(That being said, internalizing at least JIT into software seems pretty useful.)


There is currently no legal framework that would allow anything other than "driver is responsible for vehicle". (I suppose that might change soon.)


More accurately, there is no settled legal framework here, period. Tesla's playing PR to avoid a damaging precedent.


Surprisingly correct: the USA is the only Western country that did not sign the Vienna Convention on Road Traffic. Special snowflake as usual :D


I was under the impression that the driver is blamed because the driver was actually at fault (such as sometimes not even having autopilot on when crashing).


I always assumed that humans evolved eyes adapted to those wavelengths of light because they gave optimal information for avoiding collisions. (Well, except for glass panes; we're not genetically optimized for those.)

It feels scary to discard millions of years of evolution and go with radar-first, but as always, time will tell.


The biological eye also developed based on the availability of natural electromagnetic radiation, and we are using most of the spectrum that easily passes through our atmosphere. Radar waves are nice in that they pass through rain and fog, but they are not well reflected by animals. So for surviving in the wild they are less useful, but on the streets, where most cars are metal-heavy, they work much better. They do not, however, give a very well-resolved picture: the radar cannot really distinguish between a bridge and a high trailer. The new software tries to work around that by using data from other Teslas that passed the same road section before.


Following on the parent's comment on the human eye:

One of the most striking graphs I've seen was the plot of the transparency of water vs optical wavelength.

Basically, there's only a tiny transparency window, which coincides almost exactly with the wavelengths the human eye is sensitive to.

http://hyperphysics.phy-astr.gsu.edu/hbase/chemical/watabs.h...


I forget where I read it, but I thought the visible spectrum likely developed because it can penetrate water, and that was where the first life began. Any later development of depth perception would seemingly be a tangential thing and not implicit in the wavelengths used by our eyes. Don't blue and green wavelengths pass through water rather well?


> The radar cannot really distinguish between a bridge and a high trailer.

Are there any visualizations available of what these radar patterns look like?

I wonder whether a human looking at the radar visualizations could reliably discriminate between them. I have to suspect that we could, but I've never seen such a visualization and don't have a good sense of what kind of detail it includes.


Here is a really great writeup from a guy building his own radar. Look at e.g. the SAR images right at the end.

http://hforsten.com/homemade-synthetic-aperture-radar.html


Humans do not have a light source and use the Sun's light. This is why we see in the "visible" spectrum.

If you can bring your own well-collimated, coherent source, you have no reason to stick to visible light. On the contrary, other spectra might be better (lower background noise, easier to produce coherently, easier to direct, etc.).


While I agree with your points, I would say to the contrary that roads are currently marked up for visible-spectrum usage. Road signs and lines are made for consumption within the visible spectrum - only some of which may be visible in other bands.

Though possibly an argument could be made that a self-driving car doesn't need to read signs if it can get that data from a formatted online source (e.g. street names on signs vs. from GPS and maps).


Two words: update lag. Is the "road closed this week, bridge out ahead" sign going to appear in a map? How?


The scary part is that previously they were supposed to be using the cameras as the primary data source, whereas that requires cutting-edge AI that does not exist yet (nVidia and some groups are making progress), and even then it is limited by not being elevated and not being able to see through solid things.

I would guess that Musk or someone declared that cameras would be the primary approach, and the engineers below kind of had to go along with the idea on the surface, but in the actual implementation some of it really came down to what the radar was saying anyway.

If this is ever going to be 'level 4', it will probably be with both an enhanced deep-learning visual system (which doesn't quite exist yet) and an as-yet-nonexistent inexpensive elevated LIDAR. LIDAR might come down in price within the next two or three years.


I'm sorry your comment was downvoted, but thanks for taking one for the team. Whether you are right or wrong, it's exactly the kind of comment that stimulates an interesting conversation, like this one which I learned some new things from.

To the downvoters: how do you expect to have an environment conducive to intellectually gratifying discussions, if you're not willing for someone to share a wild idea, and maybe even be a little bit wrong, without penalty?


"This is where fleet learning comes in handy. Initially, the vehicle fleet will take no action except to note the position of road signs, bridges and other stationary objects, mapping the world according to radar."

Wow, glad to see that they are using big data and machine learning. If all those 400k orders go through, there will be a network effect in favor of Tesla.


Perhaps Tesla might release the data. Musk is all about improving humanity; selling cars is a means to an end, rather than the end itself, and that end could be hastened if every manufacturer could access (and augment?) the database.


Musk is all about saving humanity; he's apocalyptic. Probably rightly so. Improving it also, but that's not what really drives him.


He'd be stupid to at the moment. You want to leverage that asset to get other companies to open up their data.


Two really interesting notes buried in the release notes at the end:

> With further data gathering, car will activate Autosteer to avoid collision when probability ~100%

> Curve speed adaptation now uses fleet-learned roadway curvature


...Imagine this in racing. I saw some races over the weekend, and the racing line varied considerably between cars and drivers - it's not like F1, where every driver is that good. I imagine the wider population has an even wider idea of what the 'racing line' is. In theory a Tesla could take better-than-Senna lines through every curve, avoiding the crashes and also optimising efficient regenerative braking. I look forward to this, and I am glad the Tesla brain is learning from 'the fleet'.


BMW was doing that circa 2011; they even brought it to the Top Gear track.

So yeah, there's a precedent :)


If I were to ever buy a Tesla, could I turn off data collection?

If not... that would turn off a lot of privacy-conscious people - a crowd Tesla doesn't tend to attract at its current prices, but one that may become relevant as they come out with cheaper cars.


You could remove the SIM card (if there is one), remove the antennas, pry off the modem, run a cell jammer (only for driving in Antarctica!), drive it in the middle of nowhere, DOS them via your free network connection until they boot you off (and hopefully they don't brick your car), etc...


If you have a mobile phone, you are already broadcasting your location. Go to google maps, open the sidebar menu, select "your timeline".

Sorry. Privacy kind of went byebye.


I don't understand arguments like that.

"You already have heart disease, who cares about colorectal cancer? Have a cheeseburger!"

Privacy isn't some binary choice, and having less data collected means less data being spilled when X company has a server compromised.


> If you have a mobile phone, you are already broadcasting your location. Go to google maps, open the sidebar menu, select "your timeline".

Except you can, you know, turn that off.


I'm aware of Google Location History. I turned it off.

In any case, there's a difference between Google storing my location (they don't), Tesla storing my location (they would), and NSA storing my location (they do).

Google gives me the option to opt out. If I opt in, they give me cool features that require said location history.

NSA doesn't let anyone opt out, naturally.

Tesla doesn't let users opt out, even if they don't use autopilot. As we've seen, they actively use their location data to attempt to exonerate themselves whenever there's any kind of accident. In any case, Tesla gives me nothing in return for this information. If I were to try to modify the car to send no data, I don't doubt that Tesla would disable the fucking car.


Every time I am reminded of how Tesla can update their cars in the field, I imagine the stress of being responsible for securing a network like that, and the risk a compromise carries. Heavy work for some team.


The "simple" radar cruise control in my 2014 Mazda 3 Astina (ex-demo; MT) is an amazing experience. I'm finding it remarkably accurate, even picking up objects (motorcycles, bicycles) that Mazda explicitly states will not be correctly registered. The simple yet effective HUD indicates the current following distance (as a side note, most drivers follow way closer than 2 s). Coupled with the visual AEB system, I have autonomous braking for traffic and emergencies. I still have to steer, but the lane departure system warns if I'm exiting my lane without indicating above 65 km/h.
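(For anyone who hasn't done the arithmetic, 2 s is more distance than it sounds:)

    # Following distance implied by the 2-second rule at a few speeds.
    for kmh in (50, 65, 100):
        metres = kmh / 3.6 * 2  # speed in m/s times 2 seconds
        print(f"{kmh} km/h -> {metres:.0f} m")
    # 50 km/h -> 28 m, 65 km/h -> 36 m, 100 km/h -> 56 m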

It might not be anything like Tesla's Autopilot, but it's still a pretty sweet taste of the future. So it stands to reason more can be done with it; I wish I had the funds for a Tesla... maybe one year :)


Slightly off topic, but are the software updates optional or mandatory in Tesla cars? As I do with phones, I hold off on updates until I know it's stable and has no major bugs. Serious bugs in a car software could be fatal.


I can't even begin to imagine how that would be done in this case. While the downside of Bad Things in a Tesla car is not nearly as spectacular as in a late-1950s experimental aircraft, it seems all owners are in effect test pilots.

What I'd like to see is an emphasis on stupefyingly comprehensive test vectors.


Optional, but actually semi-mandatory. You have to manually accept a software update to install it, but there was recently a reported case of someone who refused to do this and stayed on old firmware. Over time, Tesla gradually deactivated things like the navigation and entertainment systems (if I recall correctly), claiming that they had upgraded their APIs and these were no longer compatible with the owner's firmware. In reality, I suspect they were just punishing the owner to coerce them into upgrading.


Does the car still use optical cameras? I'm not understanding how those factor in.


It has to for lane keeping. Radar can't see painted lines.
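For the curious, the textbook approach to camera lane detection is edge detection plus a Hough transform; a bare-bones OpenCV sketch (not a claim about Tesla's actual vision stack):

    # Classic lane-line finder: grayscale -> Canny edges -> Hough segments.
    import cv2
    import numpy as np

    def find_lane_segments(bgr_frame):
        gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        # Probabilistic Hough: returns segments as (x1, y1, x2, y2)
        return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=60, maxLineGap=20)

A production system adds perspective correction, color filtering, and temporal smoothing on top, but that's the core idea.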


Serious question - if there are two Teslas, travelling in opposite directions, will their signals not interfere with each other?


One of the easiest ways would be to use a randomized pulse repetition frequency; the other Tesla would then be recognized as a discrete interference source.

Seems this is a well-known problem in airborne radar: http://www.dtic.mil/dtic/tr/fulltext/u2/a402557.pdf
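A toy simulation shows why the jitter helps (every number here is invented): our own echoes always arrive at a fixed delay after whichever pulse we just sent, while the other radar's pulses land at random offsets relative to our schedule, so they never pile up in one range bin.

    import numpy as np
    rng = np.random.default_rng(0)

    n = 1000
    tx = np.cumsum(rng.uniform(90e-6, 110e-6, n))  # jittered pulse times
    echoes = tx + 2.0e-6                           # target ~300 m away
    interferer = rng.uniform(tx[0], tx[-1], n)     # other car's pulses

    def delay_after_last_tx(arrivals):
        i = np.searchsorted(tx, arrivals, side="right") - 1
        return arrivals - tx[i]

    bins = np.linspace(0, 10e-6, 50)
    own, _ = np.histogram(delay_after_last_tx(echoes), bins)
    other, _ = np.histogram(delay_after_last_tx(interferer), bins)
    print(own.max(), other.max())  # e.g. 1000 vs. a handful per bin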


> Something made of wood or painted plastic, though opaque to a person, is almost as transparent as glass to radar.

combined with

> ...we now believe [radar] can be used as a primary control sensor without requiring the camera to confirm visual image recognition.

seems like they're now intentionally ignoring the possibility of wooden and painted plastic obstacles?


Sounds like the opposite to me: if the radar says "stop", then the car will stop. It won't go, though, if the camera says stop.
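In other words, the gating seems to have flipped from AND to OR; my paraphrase in code (not anything from Tesla):

    # Old behaviour (as I understand it): radar needed camera confirmation.
    def brake_old(radar_stop: bool, camera_stop: bool) -> bool:
        return radar_stop and camera_stop

    # New behaviour: radar alone can trigger braking, and the camera still
    # catches wooden or plastic obstacles that radar sees through.
    def brake_new(radar_stop: bool, camera_stop: bool) -> bool:
        return radar_stop or camera_stop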


> The radar was added to all Tesla vehicles in October 2014 as part of the Autopilot hardware suite, but was only meant to be a supplementary sensor to the primary camera and image processing system.

I guess I'm surprised that what sounds like a large change in ConOps can be rolled out as an upgrade across a fleet in such a short period of time. It'd be fascinating to hear what sort of V&V had to be done, and how it was accomplished so quickly, to make this happen.


I hope they take their time and not rush the update. This is probably the biggest change since introducing Autopilot. Also, Tesla is now in a very dangerous moment where a serious failure can bury Autopilot.


Interesting that they are silently crowdsourcing a map overlay. When you drive a route in your Tesla, you're playing the part of a Google mapping car for them.


So are they saying radar wouldn't see a tree fallen across the road? Is there a backup system for that kind of thing?


The human. At least, given the technology that Tesla is deploying here, the driver would be expected to continue to pay attention to the road and to brake for obstacles.

I mean, a really cynical view looks at the part where it works better as a smokescreen for the part where it much more aggressively monitors driver attention.


Can someone explain the other release notes? Most of them seemed terse, and I didn't understand what they were saying.


I'm not understanding how they surmount the soda can problem.


A single radar return at just the right angle might make the soda can look huge. But with multiple sensors moving relative to the soda can and sampling at 10 Hz, you end up able to form a "real" composite of the object.
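Something like a persistence filter over successive frames, I'd assume (a toy sketch; every threshold here is made up):

    # A soda can glints hugely only in a narrow angular window, so as the
    # car moves it won't stay strong across many consecutive 10 Hz frames.
    from collections import deque

    class TrackedObject:
        def __init__(self, window: int = 10):
            self.history = deque(maxlen=window)  # last ~1 s of returns

        def update(self, cross_section: float) -> None:
            self.history.append(cross_section)

        def is_real_obstacle(self, min_frames: int = 7,
                             min_rcs: float = 1.0) -> bool:
            strong = sum(1 for rcs in self.history if rcs >= min_rcs)
            return strong >= min_frames  # big in most frames -> real

    can = TrackedObject()
    for rcs in [0.1, 0.1, 9.0, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]:
        can.update(rcs)
    print(can.is_real_obstacle())  # False: one glinting frame isn't enough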


Wonder when the next HW is coming; it's supposed to be this year.


My guess is the same time that the Model 3 launches (July/August)


Maybe, but considering the scrutiny on Tesla right now, I think they'd prefer to launch any new hardware ASAP, irrespective of the marketing benefits of launching it with the 3.


The speculation is that it is coming this fall, and that it will not be backportable to already-sold cars. One of the main reasons I am holding off on a purchase.


Makes me wonder what's going on with radar detectors these days. Are they just 100% false alarms with all the cars using radar?


It is frightening how little the press release addresses the many catastrophic failure modes.


Yes, especially how it, being a press release, was written with a potentially large -- but unknowable -- bias towards minimizing those failure modes.


Just needs to have fewer failure modes than humans.


We have no evidence that the current Tesla system causes fewer injuries or fatalities. Moreover, the alternative isn't non-automation but other reasonable automated designs, ones that do address the failure modes.


Failure modes aren't just a simple scalar quantity, much less a yes/no.


> The net effect of this, combined with the fact that radar sees through most visual obscuration, is that the car should almost always hit the brakes correctly even if a UFO were to land on the freeway in zero visibility conditions.

Unless he means "UFO" in the formal, aviation sense of an unidentified radar contact, I think Mr. Musk is severely underestimating alien stealth technology. I can't quite wrap my head around the notion that an advanced species could perfect interstellar travel but somehow be incapable of stealth technology.


Maybe they simply never needed it, their primary sensor being gravitational wave GrDAR?


If they're landing on a freeway why would they need stealth?



