
> I think the ethical question is greatly overblown.

Not at all. One programmer is going to make choices that affect ~a billion vehicles one day, so something that doesn't happen 99.9999% of the time per hour will happen 100 times per hour. And it's going to be more mundane things, like programming the speed limit 1 mph slower resulting in 500 fewer deaths per year, or using DRM/copyright to stop poor people from getting driving software, thus killing thousands of people.


"Per hour" is rather ridiculous. I'm talking about per crash. There are a couple million vehicle crashes in the US each year, so if it really is 99.9999%, that's two or three ethical crises per year here. Among a billion vehicles it would be maybe ten per year. And honestly I'd be surprised if it's that many. When have you ever heard of a crash where there was any kind of ethical dilemma in the response? People keep imagining scenarios, but I've never heard of one happening in real life.

As for speed limits and such, I'm talking about the car's ethical choices in crashes, not the ethics of the programmers.


I'm actually familiar with a situation like this: the person was on a highway at night and came around a corner to see a deer lying in the road and a crowd right behind the deer. There were a couple of people off to either side. They couldn't brake in time to stop and not harm the people on the other side of the deer.


The people crowded behind the deer ought to have sent a person off in either direction to warn approaching traffic of the obstacle.

I recently attended to an echidna sitting in the middle of the other lane on a blind corner. I reversed until I could see straight road for 60 meters, stopped with my car at an odd angle on the side of the road with the hazard lights on to give an indication to approaching vehicles that something unusual was occurring ahead, namely that my partner and I were on the road on a blind corner. Then we listened carefully for approaching vehicles while we got the echidna off the road.

The ethical dilemma lies not with the driver, who is otherwise driving to the conditions, but with the people who have put themselves at risk by performing a task in a dangerous environment without appropriate hazard protection.


Are you saying the correct response is to swerve in to the group with the fewest people?

I don't think human drivers have the reaction time to make these decisions, not sure why so many expect computers to be able to.


Because eventually, computers will be able to.


Those computers probably won't get into situations like this in the first place, being able to stop or react preventively. Anyway, it will be so rare an occasion, with no guaranteed outcome (it's all about probabilities), that it doesn't really matter as much as it's discussed. With the same kind of attention you could ask whether making and selling ladders is ethical, because people fall from them and hurt themselves.


> Those computers probably won't get into situations like this in the first place, being able to stop or react preventively.

It will always be the case that circumstances can change faster than something with the momentum of a fast moving car could adapt. Somewhere between "you're boned" and "your car saves the day" lies room for a scenario such as this, with potential room for a large amount of thinking.

> Anyway, it will be so rare an occasion, with no guaranteed outcome (it's all about probabilities), that it doesn't really matter as much as it's discussed.

I agree that it will be rare, and that the concern is overblown. That said, "it's all about probabilities" is no reason something can't be vitally important.

> With the same kind of attention you could ask whether making and selling ladders is ethical, because people fall from them and hurt themselves.

I think there is a substantive difference between the two. We're not asking whether selling a car that has a chance of injuring the user is ethical - we're fine with that. We're not even asking whether selling a car that has a chance of hurting others is ethical (already meaningfully different than the ladder case). We're asking about the ethical ramifications of making particular tradeoffs in "chance to hurt the user" versus "chance to hurt others". It's an interesting question, so it gets a lot of attention. I don't think most of those involved see it as a reason to prevent self-driving cars - long before the dilemma is really relevant, self-driving cars are already safer than human-driven ones.


> It will always be the case that circumstances can change faster than something with the momentum of a fast moving car could adapt. Somewhere between "you're boned" and "your car saves the day" lies room for a scenario such as this, with potential room for a large amount of thinking.

I think it's possible that, as the AI of self-driving cars improves and overall safety gets better, there will be ≈0 occasions when a car simultaneously 1) doesn't have time to react to some sudden problem and 2) has time to make an informed decision (and physically carry it out) about how many people to save. Especially if pedestrian airbags become popular.


But by then the car will be able to engage its flight module, take off, and fly you directly home.


Even then, the car will have to calculate whether it has enough clearance to deploy the wings so that it doesn't cut down a pedestrian, or whether there's anyone who could be caught in the engine wash.

Interesting as it may be, it's only shifting the problem ;).


Maybe. Flight generally uses quite a bit more energy.


Yes, but perhaps less than you think. I fly small aircraft, and the one we fly gets ~15 mpg in cruise carrying up to 6 people at just over 200 mph. There are cars/trucks on the road that get less than that. If I slow it down to 150 mph, I can get over 20 mpg.


A properly designed self-driving car would never outdrive its sensors in the first place. This is just basic defensive driving. If there's a blind corner coming up, then it would slow down sufficiently that it would be able to come to a controlled stop if there's a stalled car or other static obstruction around the corner.
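
(As a rough sketch of what "never outdrive its sensors" means in numbers; the deceleration and reaction-latency values below are assumptions for illustration, not real vehicle parameters.)

    import math

    def max_safe_speed(sight_distance_m, decel_mps2=6.0, latency_s=0.2):
        """Highest speed (m/s) at which the car can still stop within what it can see.

        Solves v*latency + v**2 / (2*decel) <= sight_distance for v.
        """
        a, b, c = 1.0 / (2.0 * decel_mps2), latency_s, -sight_distance_m
        return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

    # e.g. a blind corner with 40 m of visible road:
    v = max_safe_speed(40.0)
    print(f"{v:.1f} m/s = {v * 3.6:.0f} km/h")   # ~20.7 m/s = ~75 km/h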


They also could not think. If you've ever been in an accident, you know that there's no time for thinking. Reflexes take over.

Ascribing ethical decisions to the driver in such a situation is neither rational nor ethical.


no, but it's a situation in which a computer could make a decision


I imagine you mean a choice like "should I run over this child who suddenly ran in front of the car or should I swerve into opposing traffic?"


What's the ethical dilemma there? From your description it sounds like they were just screwed regardless.


Also, an autonomous system would probably get information about that sort of problem and slow down prior to going around the turn.

At least, they will if we don't screw it up.


Human drivers can already get information about that sort of problem, e.g. by using Waze. This requires that a human who uses the app has reported a hazard (the act of which could sometimes result in distracted driving), however, I would expect autonomous cars to tell each other about such things in the future.


Self-driving cars can also limit speed such that they'll always be able to stop for stationary obstacles. If it's a blind corner, they'll slow down.


True, but seeing as the article was about the ability of people to modify their car to, for instance, not do that, I see where there could be a dilemma here.


thank you for saying this!!!

Because the article was completely about people REPAIRING technology they purchased, and not at all about mods, modding or modifications.


They're pretty much one and the same: how can you allow someone to repair something without allowing them to modify it as well? What if they repair it badly?


how can you allow people to own guns without allowing them to shoot people?

how can you allow mechanics to fix brakes without allowing them to break brakes?

I dunno... laws maybe? Like, the same laws that can require manufacturers to allow aftermarket competition.


> how can you allow people to own guns without allowing them to shoot people?

even more, correct me if I'm wrong here, but you can repair a gun, but it's illegal to make certain modifications to it, right?


>but it's illegal to make certain modifications to it, right?

That's correct in the US. For instance, you can repair an AR-15 back to factory specs, you can modify it in certain ways (adding different sights, etc.), but if you modify it for full-auto operation (which is quite feasible actually), you can be thrown in jail IIRC and be subject to a gigantic fine.

This system works well: people can repair and modify their property to a certain point, but making specific changes is highly illegal, and the penalties are extremely severe, so almost no one does it. The same can be done for other things if the case for public safety warrants it. So repairing your robo-car should be completely legal, but modifying it to blatantly violate emissions laws or to run cyclists off the road can be punished harshly with vehicle confiscation, $1M fines, lengthy prison sentences, etc.


I'm pretty sure the correct behaviour is to not come around a corner so fast that you can't stop for a not-yet-visible obstruction. Human drivers routinely flout this due to poor risk assessment, but an AI driver would easily be able to do it.


I'm pretty sure the correct behaviour is to not come around a corner so fast that you can't stop for a not-yet-visible obstruction.

Do you actually practice this? Setting aside legality concerns (many freeways have minimum as well as maximum speeds), if you truly did this at every curve and corner you'd be more likely to cause than to prevent an accident. The number of times there will be something you avoid by slowing down will be outweighed by the number of times your speed differential relative to surrounding traffic causes an accident (and it is speed differential, not speed itself, which is responsible for virtually all "speed-related" accidents).


For truly blind corners, yes. However, I don't have great faith in my ability to correctly judge all partially blind corners, and occasionally find myself surprised.

I don't advocate slamming on the brakes seconds before the turn, there's such a thing as slowing down safely as well.

All of this is something AIs will be nearly infinitely better at doing safely than even a minimally impaired (tired, distracted) human driver.


> I'm talking about the car's ethical choices in crashes, not the ethics of the programmers.

Cars are not entities to which ethics can be ascribed; the humans building, designing, and programming them are.


I think another way to phrase what you are saying is that "Cars are not moral agents."

Apparently there has been some disagreement as to whether cars can be moral agents?


Let me clarify: the topic here is specifically the ethics of what the car does in a crash where it may be able to get different people killed depending on what actions it takes. Speed limits and DRM are an entirely different topic.


I like how you're going to fight about other people's numbers, but act like "99.9999%" is in any way meaningful.


I happen to think that a rate of one in a million crashes is realistic (even if far from accurate, it's roughly in the ballpark) while a rate of one in a million hours is absurd on its face.


> I'm talking about the car's ethical choices in crashes, not the ethics of the programmers.

There is no practical difference. Computers do what we tell them to do; whether what I tell it to do has a conditional in the code (the computer "chooses"), or it always takes one option because I left that condition out, is a completely irrelevant distinction.


You realize that the common `debugging caveat' applies: computers do indeed do what we tell them to do. That doesn't mean that anyone understands what the computers are going to do.


Debugging sucks.

Formally verified code is not unheard of.


Formally verified code is awesome. It moves the `debugging problem' up one level, to "Does our formal spec capture our informal meaning?"

Ideally, in practice, that problem is simpler than "Does our code do what we want?"

In theory in the abstract, of course, the problems are the same.


To me the question is relatively uninteresting because it seems obvious.

As long as cars belong to individuals they have the responsibility to favor the interests of those individuals as far as law permits. Where and how much the community is to be preferred over the individual, the community should specify in laws. People running code that breaks those laws should be liable for that code's actions. (I could imagine an ethics slider settable at the owner's choice within the legal limits that allows them to discount the lives of the passengers compared to the lives of others, on the condition that they notify their other passengers).
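
(Purely to illustrate "settable at the owner's choice within the legal limits", here is a toy sketch; the 0.3 floor and the names are invented for the example.)

    from dataclasses import dataclass

    @dataclass
    class EthicsSetting:
        """Hypothetical owner-facing slider: 0.0 = fully favor occupants,
        1.0 = fully favor others. The legal band is a made-up placeholder."""
        LEGAL_MIN = 0.3   # invented lower bound, imagined as set by regulation
        LEGAL_MAX = 1.0
        value: float = 0.3

        def set(self, requested):
            # Owners can pick any value, but only within the legally mandated band.
            self.value = min(max(requested, self.LEGAL_MIN), self.LEGAL_MAX)
            return self.value

    s = EthicsSetting()
    print(s.set(0.1))   # clamped up to 0.3: can't discount others below the legal floor
    print(s.set(0.8))   # 0.8: the owner chooses to weight others more heavily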

It's possible that ultimately the right choice is to move to a different model, but in that case, the AI should not be owned by the individual who bought the car and has a reasonable expectation that it work on their behalf.

The ethical decision facing the car is no different from the ethical decision that we ask the human driver to make in similar situations (which, as the gp noted, are incredibly rare), or that we trust the human driver to make when we are passengers in their car. In fact I even doubt that it makes sense to legislate it, since we haven't so far for e.g. Uber drivers, but perhaps testing will indicate that we should.

In terms of the speed limits, society has already decided to accept a huge number of deaths per year that would almost all be preventable with a speed limit of 35mph, so I don't see how the introduction of AI drivers affects this question.

I would like to see laws that say that all source code for self driving cars must be publicly auditable and pass tests before being approved for road use. Running code that has not been safety tested by the state will leave you uninsurable and personally liable for its actions. We should aim to make running the test suite as accessible as possible in order to allow innovation, but it should still give us a high confidence that the driver behaves correctly according to the law.

I take pretty much all of this from analogy with human drivers: when we're driven by someone else, we have relatively little control over their ethical slider setting, we have high speed limits for humans because we think the convenience of the many is more important than the lives of the few, we don't let brand new humans drive without having some indication that they'll make the right decisions. None of these problems seem to be particularly changed by the introduction of AI.


> As long as cars belong to individuals they have the responsibility to favor the interests of those individuals as far as law permits.

Well put. I would not buy a car that wasn't designed to protect its occupants first and foremost. My family is more important to me than abstract, unlikely to occur ethical dilemmas.

Having that choice programmed in actually makes it easier for self-driving cars to predict the behavior of other cars on the road. It's simple and probably at least a near-global maximum for optimal car fitness.


Have you ever been in a taxi? Or a bus? Or any situation where someone who is not you is driving the car?

You have put your life into their hands when you do this.

No matter what algorithm the self driving car is running, it is going to be 100 times safer than you driving the car. Yes, YOU are a worse driver than a self driving car, NO MATTER what "ethics algorithm" it is running, because any of the ones that could be running will be safer than you or the taxi driver.

What you are basically saying is that "I don't trust any braking system in my car that isn't 100% perfect, therefore I am going to drive my car without ANY braking system! I'll physically stop the car myself if I ever need to brake!"


No matter what algorithm the self driving car is running, it is going to be 100 times safer than you driving the car.

And yet, I have driven hundreds of thousands of miles over many years under all kinds of road conditions without ever having an accident. Statistically my record is better than many, but it's hardly a unique achievement among experienced drivers who are careful.

Tesla's Autopilot probably has done more than 100x the miles I have by now, but it's also had several accidents, some of them fatal. As I understand it, it has also only been used on highways, which are statistically much safer and have much less challenging driving conditions than places like winding rural roads or inner city residential areas.

As another example, Google's self-driving cars have apparently done less than 10x the driving miles I have, but under somewhat more challenging conditions than highways. They too have had accidents, and they too reportedly still can't cope with anything close to truly realistic driving conditions on their own.

So no, it appears that the state of the art in self-driving technology today is not 100x safer than me driving a car. If anything, it's looking considerably more dangerous. Of course there is potential there -- no matter how skillful and careful I am, I still only have two eyes and human reaction times -- but automated driving is still a long way from outperforming good human judgement as things stand today.


But are self-driving cars better drivers than some identifiable group of people other than you? E.g. legal but elderly drivers with poor sight.

Are the algorithms a useful helper for some drivers?

I really, really want all our rental vehicles to remind the drivers if they are on the wrong side of the road (we get multiple deaths every year from Americans and Europeans driving on the wrong side in our small country).


I expect that with time these self-driving algorithms will become safer than an increasing proportion of human drivers. I'm just very wary of assuming they are better than most or all human drivers prematurely.

We as a society have a habit of putting too much faith in technology. Most of us aren't knowledgeable and unbiased enough to assess its risks objectively, particularly in areas where there is a very low chance of something happening but it will be very bad if it does happen.


> No matter what algorithm the self driving car is running, it is going to be 100 times safer than you driving the car.

That seems very suspect.


I think that you highly underestimate how bad humans are at driving.

Being better than a human driver is a very low bar to pass.


Being 100x better than a sober, attentive human is a very high bar to pass.


You're assuming most drivers on the road are attentive.


False equivalence.

If you get into a taxi or a bus, there's still a human driver who has the same survival instinct that you have. If he has to run someone over so that he can survive, he likely will, saving you too in the process.

This isn't about systems being perfect, it's about whether the system should err on the side of protecting the car's occupants or on minimizing casualties overall.


Programmers are not going to decide what is an acceptable speed relative to the posted limit, or whether the car will use DRM. Much like how programmers at Volkswagen, Audi, etc. are not the ones who chose to cheat emissions tests.

The real choice is going to be what to do about soft obstacles. Hitting a person or a deer can easily kill passengers, and hitting trees is actually normally safer, thus making the discussion fairly moot.

People are going to want to hack self-driving cars so they can speed, which is more in line with this moral discussion. Not because it's safe for passengers, but because it's also dangerous for other people.


Programmers have to make decisions about those things all the time. It isn't like the product team defines every nitty-gritty detail of implementation -- especially for machine learning and AI algorithms.


Sure, in the "how cautious should I be in the rain" situation. Not in the "it's a sunny day and an open road, the speed limit is 65, what should I do" situation.

PS: I expect companies to punt with user-selectable "how much to speed" buttons.


You're wrong. Programmers cannot escape making decisions when they design these algorithms, no matter the situation they are designed for.


A lot of AI is based around training sets, not traditional algorithms like you're thinking of. There are simply too many edge cases to deal with by hand. Yes, there is also a lot of hand-coding in these systems, but good training sets are critical.


The choice of those training sets, including their contents, their size, how they are consumed, and the algorithms they are fed to, affects the output. Therefore, programmers make decisions. It is unavoidable.
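
(A contrived sketch with made-up toy data: the same trivial classifier, fed two differently chosen training sets, gives different answers for the same input. The labels and coordinates are invented for illustration only.)

    import numpy as np

    def nearest_centroid(train_x, train_y, query):
        """Toy classifier: return the label of the closest class centroid."""
        labels = np.unique(train_y)
        centroids = np.array([train_x[train_y == lab].mean(axis=0) for lab in labels])
        return labels[np.argmin(np.linalg.norm(centroids - query, axis=1))]

    query = np.array([2.0, 2.0])

    # Training set A: the "brake" examples happen to be sampled far from the query.
    xa = np.array([[0, 0], [0, 1], [5, 5], [4, 5], [5, 4]], dtype=float)
    ya = np.array(["swerve", "swerve", "brake", "brake", "brake"])

    # Training set B: same labels, but the "brake" examples are sampled near the query.
    xb = np.array([[0, 0], [0, 1], [2, 3], [3, 2], [2, 2.5]], dtype=float)
    yb = np.array(["swerve", "swerve", "brake", "brake", "brake"])

    print(nearest_centroid(xa, ya, query))   # swerve
    print(nearest_centroid(xb, yb, query))   # brake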


I don't think the people doing a million miles driving Google cars around are classified as programmers.


Actually, some of them are programmers.

Even if they weren't, it is unrealistic to assume that programmers have no input into the decisions that go into building the training set, which in this case involves where, and in what conditions, the prototype cars are driven.


Granted, though plenty of Uber drivers are probably also programmers, and I have made plenty of business decisions I probably should not have. But, I would hope this is held to a higher standard.


If the cars obey the posted speed limit and slow down and split the difference when on curves or near hazards (such as children running on the sidewalk), the hypothetical moral dilemmas shouldn't even happen. So far, I've only heard of one accident involving a Google car, and there wasn't a fatality.


Natural environment/human/animal behaviour related issues kind of throw a wrench in that idea.

Yes, in normal conditions you shouldn't ever be in a situation where the 'trolley problem' is relevant. But it's also quite easy to imagine external factors that might make it unavoidable. Like say, a car running out of power in the middle of the road, falling trees, buildings, fences or other obstacles, people or animals in the road, natural disasters or freak weather conditions.

All of these situations open up a possibility that the car won't have enough time to stop regardless of the speed limit.


There's only been one accident that was the Google car's fault. Google cars have been involved in more than a dozen collisions in total.


Engineers make such decisions all the time. There are always tradeoffs with respect to safety/cost/utility/etc.


Why use an ANKY in the title? Using an ANKY (acronym no one knows yet) is bad writing, makes readers feel dumb, etc. Google JUST NOW invented that acronym; sticking it in the title like just another word we should understand is absolutely ridiculous.


I honestly didn't find it that hard to understand as they recently released a machine learning library that started with a T.


Are they accepted upstream? If not, I'd like to see a link to the lkml thread for these.


I wonder if they are discouraged by the experience of Con Kolivas[1] who proposed an alternative scheduler back in 2007. (Apparently he is still maintaining his "-ck" fork of the linux kernel[2]!)

I only mention this as a historical case that has remained in my memory. Maybe Linus is willing to revisit the issue, I don't follow LKML.

[1] https://en.wikipedia.org/wiki/Con_Kolivas

[2] http://ck-hack.blogspot.com/2015/12/bfs-467-linux-43-ck3.htm...


I do not follow kernel dev enough to have a good picture of what happened, but it seems to me that it was another example of smart people being pushed out. Nowadays I think he is just maintaining his patches from one kernel release to the next. It was smart for Con to stop interacting with them and move on to other kinds of dev work; there comes a point where you have to keep your sanity.


I remember the whole debacle back then, when the lkml was summarized on that website whose name I can't remember.

It'd be interesting to see if that branch exhibits the same behavior and issues.


Maybe KernelTrap? I was sad when it shut down, it was a very useful resource for following Linux development from the sidelines.


Yeah that one! The memories.


You might be thinking of LWN. They covered the scheduler multiple times over the years: https://lwn.net/Kernel/Index/#Scheduler


I can't find any evidence they were submitted upstream at any juncture.

Moreover, since (several of) the patches seem to rip out a bunch of logic in favor of very simple logic, they would probably be contentious without broad testing and probably a runtime option to configure the behavior.


And uses spaces instead of tabs. Someone will need to go fix them up.


Not sure why you're being downvoted. Tabs instead of spaces is the Linux kernel style.


The conference where this will be presented starts next week. So I think they'll post it after they've caught up on sleep after that.


Which conference, Eurosys?


Yes, since the filename is eurosys16-final29.pdf.


Nope nope nope. The GPL is a binary decision: either you are in compliance and can distribute, or you are not. So, in this case, it would be negative: not allowed to distribute. Unless the law requires someone to distribute GPL software, which it doesn't, there is no conflict here.


Nope nope nope. My license can say "cannot distribute in places where law x exists." My license can say "no one can distribute. period."


Feedback. I can't get much from your screenshots since I can't expand them and they are too small to read anything. I don't know what a "simulated phishing campaign" is or what results I might get out of it; I just don't really understand the whole purpose/process, and you don't give any info other than the phrase "simulated phishing campaign" to explain what the thing does. So, I can create simulated phishing emails, and I can see who opens them. I can do this easily. But we're missing something. What does a simulated phishing email accomplish? What does a user get out of it? What would one look like? When would I send one?...


Great feedback, thanks! We're working on a new landing page that hopefully shows this in action a bit more, as well as provides a bit more information on why you might want to use gophish and when.

I'll keep this feedback in mind. Very helpful!


Because Canonical wants to be the OS of lightweight VM images and Docker images. You can't do that if the first step in distribution is always "boot into a separate filesystem, compile a kernel module, then reboot", and then, if you want to distribute your image to anyone outside your organization, you must first remove that kernel module and have them do the same process again.


The engineering problems are tractable even in those cases: Don't put the root file system on ZFS. Develop some kind of first-boot initrd that includes enough of a toolchain to build zfs.ko. I'm sure there are other, better ways to solve shipping ZFS in a virtual machine image, especially compared to the legal and business risks of having an LTS release deemed a copyright infringement: court costs, legal fees, fines, re-releasing Ubuntu, etc.


I don't think they are tractable. They break the requirements.


In Linux, there is the immutable file attribute, which won't let root rm the file without first changing it with chattr.
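
A minimal sketch of using it from Python, just shelling out to the standard chattr/lsattr tools (requires root, only works on filesystems that implement these attributes, and the example path is hypothetical):

    import subprocess

    def set_immutable(path, on=True):
        """Toggle the immutable attribute via chattr (requires root)."""
        subprocess.run(["chattr", "+i" if on else "-i", path], check=True)

    def show_attrs(path):
        """Return the raw lsattr output line for a file."""
        return subprocess.run(["lsattr", path], check=True,
                              capture_output=True, text=True).stdout.strip()

    # Example (as root): even `rm -f` fails until the flag is cleared again.
    # set_immutable("/etc/important.conf")
    # print(show_attrs("/etc/important.conf"))
    # set_immutable("/etc/important.conf", on=False)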


File attributes need to be implemented by the filesystem. efivars is a pseudo-filesystem which does not implement them. It could, but that is one of the more needlessly complicated solutions...


Is Apache reporting back to the Apache Foundation about how you are using your Apache software? Maybe your expectations about user tracking are the problem.


I'm pretty sure they aren't, but I must admit, I have never checked. I suppose that would be known by now... One thing is sure: access logs are located on my own VM and are processed locally by awstats. The only thing that binds requests to an actual user is the IP address, and one would have to contact ISPs to get more information. I don't have the authority to do that.


> not be based on anything technical, but on how an API is used in practice

Since it's an API, the "in practice" is some other piece of code, so I would still call it a technical issue.

