Hacker News

You're better off thinking about the logical properties that entail moral responsibility. Consider that it makes no sense to hold a car engine morally responsible for malfunctioning and killing its passengers, but it does make sense to hold a driver responsible (and it might make sense to do the same for a general AI in the future). Why?

What distinguishes how we deal with one deterministic system from another? That's what free will is. The debate has progressed quite well over the past century, and most philosophers are Compatibilists like the OP for very good reasons.




I love your framing of the question, but am frustrated with the Compatibilist conclusion.

"Moral responsibility" is a heuristic for our monkey brains, tapping into evolved ideas of justice, accounting for imperfect prediction of future events, attempting to control human behavior one way or the other. The reason we don't lecture engines about morality is because it is perfectly clear that they obey only one kingdom of rules, the physical.

Humans also must obey physical rules, but there are so many layers between base physics and human behavior that we speak in very fuzzy heuristic terms about morality. This does not mean free will is anywhere to be found in one of those intermediate layers, just that it's too complicated a machine for us to analyze like an engine, so we resort to using the levers and knobs that evolution gave us. IOW, free will is a "god of the gaps", only existing while we still lack the knowledge to better understand the layers between our gross beliefs/actions and base physical processes.


> This does not mean free will is anywhere to be found in one of those intermediate layers

The problem here is that you're carrying some preconceptions of what free will means into this debate. The whole point of the free will debate is to define free will and figure out what it means.

Think of it this way: you can define a term in two ways, via denotative or connotative definitions. Incompatibilists push a connotative definition of free will, where they say, "free will must have such and such properties, oh and look! humans don't have those properties, therefore people don't have free will". They have been wrong every single time they've claimed a property was necessary.

By contrast, most people typically reason with denotative definitions, saying, "that thing I do when I make a choice free from coercion, that's free will". And then we explore precisely what this means and what properties this requires.

As for it being just a heuristic, I'm not sure that's accurate. Take a sorting algorithm and point out precisely which step actually does the sorting. Well, that's nonsense, isn't it? Every step is essential to a sorting algorithm, and removing any single step breaks the sorting property.
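To make the analogy concrete, here's a minimal sketch in Python (my own illustration, not from the comment): a bubble sort with an optional `skip_pass` parameter. No single pass is "the" sorting step, yet dropping any one of them can leave the list unsorted.

```python
def bubble_sort(xs, skip_pass=None):
    """Bubble sort a list; optionally omit one pass to show every pass matters."""
    xs = list(xs)  # work on a copy
    n = len(xs)
    for i in range(n - 1):
        if i == skip_pass:
            continue  # remove this one pass: the sorting property can break
        for j in range(n - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

data = [5, 1, 4, 2, 3]
print(bubble_sort(data))               # [1, 2, 3, 4, 5]
print(bubble_sort(data, skip_pass=0))  # [1, 2, 4, 5, 3] -- no longer sorted
```

No individual comparison "does the sorting"; the sorted output is a property of the whole ensemble of steps, which is the point being made about brain states and free choice.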

Your god of the gaps argument is essentially trying to do the same thing with free will: you can't point to any specific brain state and say that's free will in action, but the ensemble is what produces what we recognize as free will. All brain states we see when a decision is being made are essential to making a free choice.


I like your "God of the Gaps" analogy. That's exactly the problem I have with dualism. We don't understand the physical causes, therefore mysticism.

As cars become more than just human-controlled engines (self-driving) they are likely to be called on to make arguably moral judgements. They are being given rules to follow. For now these systems are still largely deterministic in a comprehensible way, but with deep learning neural networks playing an increasing role we might begin to lose the ability to fully and deterministically anticipate how they will behave. Not because it isn't possible in principle, but because it will become too complex to do in practice. Then what do we do?



