I agree, don’t limit yourself to one language, since sometimes there are certain ones that fit the job. I started off with Java and love it, but I haven’t used it in a while since my new job and project require Go.
As someone who lived in Hong Kong, this is such a big step backwards. Freedom to access information was such a key part of picking HK over China when setting up an APAC office.
Unlike Istio, this is built to work natively with minimal dependencies on every platform, not exclusively on Kubernetes, which means it can support both new greenfield applications and existing VM-based apps. Networking policies that would typically be available only on K8s for new apps can be applied to existing workloads today. That, and it’s meant to be easier to use, which is another big problem with existing Service Mesh implementations.
Interesting point, would love to know as well. And I echo meddlepal’s sentiment, since there are so many options it can feel overwhelming. But for someone who still has services on bare metal, would folks agree that this is a good product to explore?
The driver was found to have been using his phone, over-relying on the autopilot system, and he lied about what he was doing when questioned. Sadly, we can't 100% trust these automated driving systems yet, so folks need to stay attentive behind the wheel. More accidents like this will push lawmakers to create laws that can potentially slow down automated driving development.
The best way to stay attentive behind the wheel is to actually be the one driving. There's virtually no way of being both attentive and passive for long periods of time. If a driver can't check their email or whatever on their phone while autopilot is on, then autopilot is not safe to put in cars. And while I'm okay with not-exactly-safe for most things people willingly consume or use, driving is not an area where it's okay to roll out a feature that may cause people to stop paying attention to the road.
>If a driver can't check their email or whatever on their phone while autopilot is on, then autopilot is not safe to put in cars.
Have you ever looked at other drivers on the highway? You see people texting, emailing, eating, shaving, doing their makeup, reading a book, the list goes on and on. If there is an activity you can do while seated, odds are people are doing it while driving.
That is one of the flaws with a lot of the complaints about Autopilot. The question isn't whether Autopilot is safer than an attentive driver. It is whether it is safer than the average driver, and the average driver is not attentive 100% of the time. And there simply isn't enough data available to the public to say whether a Tesla with Autopilot is or is not safer than the average driver.
Unlike a car, a plane at cruising altitude doesn't immediately crash into the one in front if the pilot doesn't pay attention for half a second, and despite their far higher absolute speed, actions performed when flying have a much lower relative speed.
For example, in a car, several seconds of following distance is normal and following another car that closely is common. In a plane, which can move in 3 dimensions, horizontal separation is measured in minutes and it's not that common to be following immediately behind another plane at the same speed: https://en.wikipedia.org/wiki/Separation_(aeronautics)#Horiz...
In cruise, it's extremely rare to be within 3 miles of another aircraft laterally and 1000' vertically.
Takeoff, initial climb, descent, and approach to landing are high workload times, easily holding the attention of the crew.
Cruise is much more relaxed, with much less need for hair-trigger responses. My aircraft has audio and visual cues if another aircraft breaches roughly a 2 nm radius and is within 1000' vertically. Other than that, it's systems monitoring, weather monitoring, keeping up on the nav progress vs fuel remaining, and making sure the automation isn't trying to kill you, either subtly or acutely.
There are lots of differences, but one is that situations requiring aircraft pilot intervention often have a timeline measured in minutes. The recommended procedure for handling an emergency situation is to pull the checklist for that situation before starting to ensure you don't miss anything. Even urgent situations, like a TCAS alert, will often give 30 seconds warning. By contrast, this driver had roughly 2 seconds to notice that the autopilot was silently failing to handle the situation and to apply the brakes to prevent a collision.
Extensive training, restriction of hours worked, health restrictions (pretty sure my ADHD would prevent me from being a commercial pilot)...
Also, my understanding is that aircraft autopilot systems are designed to allow for pilot response to take up to three seconds. Three seconds on the highway is a lifetime.
Airplanes are 10 km up when cruising. Almost any error a pilot makes has minutes of time in which to be corrected. Contrast that with 1 second while driving. Takeoff and landing are far more involved and take roughly 15 minutes each.
I've flown "right seat" (not pilot) a fair number of times. Air traffic control interacts with the pilot quite often and at fairly unpredictable times. This requires the pilot to be alert and respond promptly to a request or direction from air traffic control. If the pilot doesn't respond promptly, air traffic control repeats the request with more urgency and goes into the "lost communications" procedure if there is no response.
tl;dr: aircraft pilots generally must be attentive. Tesla drivers on "auto pilot" - not so much.
Statistics makes for a tough sell to the survivors.
“Yes, this autonomous vehicle killed your son who just happened to be in the wrong place at the wrong time, but someone else’s son would have died in an unrelated accident somewhere else were it not for autonomy.”
You absolutely can't challenge someone's personal experience with statistics, but something about this line of argument still troubles me.
I've lost a handful of close friends to car accidents. If survivorship grants standing, then I'm for doing almost anything to reduce the tens of thousands of US road deaths each year.
Survivors have been angry at safety tech before, but we generally ignore them. I've known people who insisted they would rather be thrown from an accident; they wouldn't wear a belt. One friend came to that conclusion because his uncle died trapped in a burning vehicle. I respect the incredibly complex and deep emotional reasons that he came to his conclusion, but he was still wrong, and he shouldn't set seatbelt policy.
ON THE OTHER HAND...
We sometimes throw around this baseline of the natural accident rate of drivers, and that's slightly lying with statistics. Accidents are not evenly distributed across all drivers.
You have a lot of impaired drivers in that pack. You have a bunch of insolvent, uninsurable drivers who cause a disproportionate share of accidents.
If the worst 10% have most of the accidents, then even a car that makes us safer than the mean driver could still make 90% of us less safe. We really want a car that makes us safer than our percentile of drivers.
Maybe driverless cars are already better than the mean. Are they better than the 90th %ile? I don't know that anyone has enough data to say.
The most cautious drivers really will have an at-fault accident rate statistically indistinguishable from zero. We should absolutely be shooting for that.
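To put rough numbers on the percentile point (the figures below are entirely made up, just to show the shape of the argument), here's a tiny Python sketch of how a system that beats the fleet-wide mean can still be worse than what most individual drivers already achieve, when accidents are concentrated in the worst decile:

    # Hypothetical accident rates per million miles for ten equal driver deciles,
    # with risk concentrated in the worst decile (all numbers invented).
    rates = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.2, 1.5, 2.0, 15.0]

    mean_rate = sum(rates) / len(rates)       # fleet-wide mean: ~2.42
    median_rate = (rates[4] + rates[5]) / 2   # typical driver: ~0.95

    system_rate = 2.0  # a hypothetical system that beats the fleet mean

    # The system is "safer than the mean driver" (2.0 < 2.42), yet eight of
    # the ten deciles already crash less often than it does.
    safer_deciles = sum(r < system_rate for r in rates)
    print(mean_rate, median_rate, safer_deciles)   # ~2.42 0.95 8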
People understand drunk drivers. They understand distracted drivers. They’re angry, rightfully so, but it’s a known problem and easy to comprehend. We can make laws tougher and feel good about it, even if it doesn’t help much.
It’s harder to emotionally cope with, I speculate, when someone you love is killed by a software bug, or by machine learning where no one can precisely explain why it made a certain choice.
I’m not saying I’m against autonomy. I just don’t know how to prevent it from being caught in a backlash when people who didn’t have to die, do.
> but it’s a known problem and easy to comprehend... It's harder emotionally... when someone you love is killed by a software bug
Sure, I understand how a poorly designed steel guardrail can ride up the front of a vehicle and crush the chest of someone I had hugged a week earlier.
My sense is that understanding does not lessen the pain.
You seem to basically be trying to explain how hypothetical people that you are imagining could be experiencing pain that is more salient than mine.
>We sometimes throw around this baseline of the natural accident rate of drivers, and that's slightly lying with statistics. Accidents are not evenly distributed across all drivers.
Not only drivers, but also roads. I would imagine most people using Autopilot use it more on highways than on city streets. The rate of accidents per kilometer is obviously much higher in the city where you have more complex situations and slower speeds. So when comparing autopilot failures we need to take into account not only miles driven, but where they are driven.
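As a toy illustration of that adjustment (rates invented, purely to show why mixing road types skews the comparison):

    # Invented crash rates (per million miles) to show the road-type skew.
    human_highway = 1.0
    human_city    = 4.0
    ap_highway    = 1.0   # assume AP is exactly as good as a human on the highway

    # Human miles are split across road types; AP miles are highway-only.
    human_overall = 0.5 * human_highway + 0.5 * human_city   # 2.5

    # Compared against the whole fleet, AP looks "2.5x safer" (1.0 vs 2.5),
    # even though by construction it is no safer than a human on the roads
    # where it is actually used. The fair comparison is highway vs highway.
    print(human_overall, ap_highway)   # 2.5 1.0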
> If the worst 10% have most of the accidents, then even a car that makes us safer than the mean driver could still make 90% of us less safe. We really want a car that makes us safer than our percentile of drivers.
The standard shouldn't really vary per-person.
If it's as good as a 50th percentile driver, it's good enough.
If it's as good as a 20th percentile driver, that's also probably fine.
You're right that we shouldn't be using the mean at all, though.
The problem is that if there's any correlation between safe driving and early adoption, a median-level solution could actually lower overall road safety.
For example, teenagers are generally more dangerous drivers than most; per-mile safety actually peaks around age 60. Sixty-year-olds are also more likely to be at peak earnings and more able to afford the latest car features.
You're probably right about the 20th percentile, though? This all depends on the curve.
And there's a herd-immunity effect: once most cars are driverless, you can probably be less strict, as roads become less hazardous overall.
We'd honestly probably make the biggest dent tomorrow by just finding the worst ten percent of drivers and giving them free uber/lyft for life. That would probably save us money as a society, money we could use on driverless tech.
Those aren't teenagers. Judging anecdotally from my time as an attorney, those are serially unemployed middle aged males with revoked licenses who somehow still own an old heavy truck and drive it on the sly with a bottle of something in the glove box.
If you read through case law that profile is weirdly common.
Ambulances save some number of lives. Ambulances are involved in some number of fatal accidents.
Should ambulances be banned? Subsidized? I can imagine different ways of analyzing the question, but asking "what would you say to the families of those who die" is a nonstarter. There's nothing you can say that will be satisfying, or ease their pain.
Statistics may be cold, but they're the only tool that actually lets us answer questions like "would fewer people die with or without this change?"
Assume for a moment that vaccines could actually cause autism at a small rate.
Your argument applies there. "Yes your son became autistic, but someone else's son somewhere else didn't get measles and die."
I understand there are human emotions involved, but my opinion is that significantly reducing the death count is still a better outcome. If we can't accept that noticeably better systems will still have faults, and that those faults will result in a different, but smaller, number of deaths, then we're effectively asking for more deaths than necessary.
You don’t need to assume. With any medical procedure, there is risk involved. Including routine vaccination. That’s why there is a dedicated program to both report and compensate people who suffer adverse reactions from vaccines. See https://www.hrsa.gov/vaccine-compensation/index.html
> It is certainly okay when the rates at which these things happen is lower than the rates at which non-self driving cars have accidents.
Eh, Tesla likes to point to this statistic every time the safety of Autopilot comes into question, but it is an apples-to-oranges comparison because AP is only used on highways where traffic incidents are already far less common per mile driven.
Detailed data is hard to come by, but more nuanced studies have been far from conclusive in favor of the "Autopilot is safer than human drivers" thesis.
> it is an apples-to-oranges comparison because AP is only used on highways where traffic incidents are already far less common per mile driven.
True, and that kind of flawed argument bugs me. However, there is also data that showed that accident rates went down significantly on the same cars after Tesla first enabled autopilot via a software update. This is much more compelling to me; if the same people driving the same cars in the same areas became less likely to get into an accident as soon as autopilot became available, then I don't see how you can really argue that the feature is not beneficial to safety.
I assume he's referring to the NHTSA study which was released on the last day of the Obama administration and purported to show a 40% reduction in crash rates. It's still the highest search result on HN for "Tesla Autopilot."
That study was later shown to have employed sketchy data and egregiously flawed methodology. A deeper dive showed that the crash rates may have actually gone up with Autopilot. HN discussion here: https://news.ycombinator.com/item?id=19127613
That's one possible answer. If they're trying to make their car seem safer than other cars, it's a deceptive statistic to use.
But when the argument is "using the vehicle this way is a lot safer than the minimum requirements to be on the road", it's a relevant and fair statistic.
I could buy a car with 3 star crash ratings, and people would be fine with that. I could get in a taxi with a mediocre driver, and people would be fine with that.
Why does the comparison have to be to me driving, in a car like this?
When I'm shopping for a car, it makes sense.
But for public policy discussions, what matters is how the safety ranks against the general population.
The question is whether "Autopilot" is safer than "No Autopilot" ceteris paribus, because if it isn't, then a trivial regulation to make the roads safer with zero downside is to just ban Autopilot.
You can't determine whether Autopilot makes driving safer by comparing miles driven only on highways by Teslas to miles driven across the whole vehicle fleet.
> then a trivial regulation to make the roads safer with zero downside is to just ban Autopilot
That assumes almost everyone using autopilot would replace it with manual driving in the same car. Some of them might get cheaper cars or take Uber and end up less safe.
And even then, there's lots of stuff you can do to a car that makes it less safe that we don't ban, as long as it's still sufficiently safe.
If anything, increase safety standards. If you want to do something that won't cost extra money then do something to ban oversized cars at the same time...
I get where you’re coming from, and I agree that the needs of public safety policy and self-interest don’t always align.
What I mean to say is, when considering the safety data of any vehicle, it is insufficient to know that the vehicle performs better than the whole fleet.
It’s also necessary to know whether any particular vehicle outperforms others in its class.
Either one of those without the other is not enough; each is necessary but not sufficient.
More data is almost always better, but I disagree on what's sufficient. If a specific use case ranks moderately high vs. the entire US fleet, then that single piece of data is sufficient.
The driver that's above average but could be more above average isn't where we need to be fixing things. Not when there are millions of drivers that are either hyper-aggressive or senile still on the roads.
If you create a system that by design will cause the driver to lose focus, you are responsible for the driver losing focus.
Saying “we told the users that they must pay attention while they are not driving”, or “we told them that the system called ‘auto-pilot’ is not actually able to autopilot the car”, or “we advertised the car’s self-driving ability but had documentation saying it doesn’t actually have that ability” doesn’t change that: you are still responsible for drivers of that system behaving exactly as you would predict.
There have been numerous studies showing that people stop paying attention when they are given menial or non-focus-dependent tasks. Saying that the only safe way to use autopilot is to not be human is fundamentally the same as saying “the brakes in the car only work if you press them continuously for 5s or apply 300 lbs of force”.
My experience with autopilot isn't that I lose focus as much as I focus on something else. Instead of focusing on keeping in my lane, not going too fast, and whether the car ahead of me is braking, I'm focusing on the road ahead -- is there construction coming up? Is there debris in the road? Is there a wall ahead that gets too close to the road? Is there anything weird up ahead that needs attention? You learn what AP can and can't handle and you keep an eye out for anything out of the ordinary, which on long road trips is significantly less fatiguing and probably safer.
There's obviously going to be some of that from people in every sort of car. The hope is that over all, people with AP are safer. I personally feel like I am.
If we want to be fair to consumers and bystanders, Tesla needs their product to safely stop the reckless behavior (not simply warn the user). If it can't do that, it needs to disable AutoPilot. AutoPilot was marketed in a way which led to Tesla selling their product to people with unwittingly dangerous uses in mind.
It is unacceptable for Tesla to enable this reckless driving. Tesla has the ability to affect the scale of this problem most readily and relying on individual consumers is not in the best interests of public health.
Why can't I say literally the exact same thing about all cars?
Let's hold people responsible for their actions, not objects. It's one thing to say "autopilot makes it hard to focus", it's another thing to say "autopilot made me pick up the phone while driving, and lie about it".
Tesla's Autopilot website makes claims that other auto manufacturers don't make, like:
"With the new Tesla Vision cameras, sensors and computing power, your Tesla will navigate tighter, more complex roads."
There is a lot implied in the short phrasing of "navigate tighter, more complex roads". It also presupposes that Tesla's capabilities were sufficient in the first place.
"All new Tesla cars have the hardware needed in the future for full self-driving in almost all circumstances. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat."
Why make potentially misleading and untested future-looking statements when consumer safety should be the focus of a system like this?
https://www.tesla.com/autopilot
This might sound like a good idea in principle: in effect, don’t allow people to use the system unless they are more attentive than even most drivers are normally.
The problem is that the natural outcome is that people will simply stop using the system and end up less safe because of it, causing more accidents and possibly costing lives.
In the interest of public health, the system has to be better than the average driver, and also not annoyingly unusable.
So there is a balancing act. Tesla will already disable AP if you keep your hands off the wheel for too long. It even locks you out for the rest of the drive as a “penalty box” so you can’t just turn it back on.
As I drive down the highway on AP, the number of cars drifting in their lanes, crossing the lines, changing lanes without signaling, not maintaining proper speed, driving aggressively, driving for an exit at the last minute, panic braking, etc. is actually somewhat mind-blowing. It’s amazing there aren’t more accidents.
Versus the Model 3 which maintains speed and centers in its lane more precisely than I even do manually, for mile after mile.
What’s also remarkable is the constant progression of new features, better feature detection, more complex navigation and route planning which has happened over just the last 12 months. Tesla AP now in 2019 is significantly improved from what it looked like even July 2018, and v10 will just continue the progression.
Tesla safety stats are hard to compare to the general public, but what I can say definitively is that a Tesla on AP on the highway is a far more consistent and predictable driver than any human. It’s better to be in the Tesla on AP on the highway, but I imagine it’s also better to be driving behind or beside the Tesla in that same situation, because it’s not driving distracted like almost everyone else on the road.
>Sadly, we can't 100% trust these automated driving systems yet so folks need to stay attentive behind the wheel.
And companies selling the systems need to stop calling them "Autopilot".
>The more accidents like this will cause lawmakers to create laws that can potentially slow down automated driving development.
You'll actually hear this from people working on self-driving at most companies. That they have deliberately been conservative about their roll out and claims because they don't want to destroy the concept of self-driving before they can achieve it. Tesla has done a great job of damaging the reputation of self-driving cars for everyone.
> The physical act of turning a wheel isn't the hard part of driving. Paying attention is.
As an avid user of Tesla Autopilot I'll disagree with this. On a long road trip, monitoring my surroundings and making sure everything is generally ok is far less fatiguing than making constant microadjustments to steering, speed and following distance.
Even while paying close attention to the ride, long trips (especially in stop-and-go traffic) become significantly less stressful and exhausting.
This isn’t a matter of whether you think you’re paying attention.
It’s whether you actually are - and that’s very different from what you perceive. For example, I would be curious to see how the eye movements of regular autopilot users differ when they are using autopilot vs when they aren’t.
Based on all prior studies of attentiveness, I suspect you’d find a dramatic reduction in active scanning, etc.
Part of the reason for the many curves on I-280 on the SF peninsula was apparently to ensure drivers had to actually do something while driving. I’m tempted to investigate whether that’s a myth or fact, but the nature of the internet means I’ll probably just find that I-280 gave me cancer :)
I've noticed the same thing driving my Chrysler Pacifica, which has lane keeping and adaptive cruise (with FCW). But it's not called "autopilot", and it deactivates if it detects that my hands are off the wheel.
Model S and X owner here. Autopilot will nag you after so many seconds (depending on vehicle speed and autopilot path planning confidence) if it doesn’t detect steering wheel torque, and if you ignore the nags, it brings the vehicle to a stop safely with the hazards on.
Car and Driver tested AEB in several vehicles (Tesla, Toyota, Subaru) and they are all pretty equally bad at it. Figuring out if matter in front of you, while traveling at speed, poses a life critical hazard is hard! It’s better than no AEB, but don’t rely on it entirely or you’re going to end up maimed or dead. As the driver, you are still the final responsible party.
How do you reconcile this with the claims in the accident report that the hands-off alert intervals are measured in minutes for the accident scenario? Quoted here with link to full source:
Visual indicator after 2 minutes of no detected driver-applied torque to the steering wheel (apparently because they only detect torque - i.e. turning/steering motions - and not pressure?)
>hands-off alert intervals are measured in minutes for the accident scenario
Since January 2018 there have been software updates that reduce the time that the car will allow you to not signal to it that you are paying attention. You now have to apply a little turning force to the wheel much more often than before.
I can’t speak to their vehicle, but we run the latest software revisions on both of ours (both 2018 builds, Autopilot hardware version 2.5), and nags have never been more than 10-15 seconds apart if I’m not applying sufficient torque. I have not experienced nags taking a minute to appear (even in rush hour traffic with no curves and slow speeds).
The safety system is robust IMHO. If I push the accelerator pedal while autopilot is running, immediate nag. Running at 89 mph (max speed before Autopilot locks you out for the duration of the drive due to too high of a speed)? Nags every 5-10 seconds.
It’s not perfect by any means, but it also expects a responsible, aware driver behind the wheel.
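For what it's worth, the behavior described above (nag intervals that shrink with speed and path-planning confidence, an immediate nag on accelerator input, and a safe stop plus lockout if nags are ignored) amounts to a simple escalation policy. The Python sketch below is not Tesla's actual logic; every threshold and name is invented, it just illustrates the kind of policy these comments describe:

    # Rough sketch of an escalating hands-on-wheel nag policy.
    # Not Tesla's real implementation; thresholds are invented for illustration.

    def nag_interval_s(speed_mph, path_confidence):
        """Shorter nag interval at higher speed or lower planning confidence."""
        base = 15.0 if path_confidence > 0.8 else 8.0
        return min(base, 5.0) if speed_mph > 80 else base

    def tick(state, speed_mph, path_confidence, torque_detected, accel_pressed):
        """One one-second control-loop step; returns the action to take."""
        if accel_pressed:
            return "nag_now"              # immediate nag, per the comment above
        if torque_detected:
            state["since_torque"] = 0.0
            state["ignored"] = 0
            return "ok"
        state["since_torque"] += 1.0
        if state["since_torque"] >= nag_interval_s(speed_mph, path_confidence):
            state["since_torque"] = 0.0
            state["ignored"] += 1
            if state["ignored"] >= 3:
                return "stop_with_hazards_and_lock_out"
            return "nag_now"
        return "ok"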
> is far less fatiguing than making constant microadjustments to steering, speed and following distance.
Adaptive cruise control and lane assist, available in every modern car under the sun, will make microadjustments to steering, speed, and following distance all day long.
I also wonder whether “dumber” systems are easier to predict. I’d feel much more comfortable knowing what the car won’t do (change lanes on me, e.g.).
(Update: to be clear, I’m not inherently opposed to a car changing lanes if I need it to, but it seems easier to deal with a potential emergency, and easier to persuade myself to pay more attention, if I know definitively it won’t do something that sophisticated.)