> People still do things and decide things, they are just not the conscious author of those actions and decisions
Nothing you have said supports this claim. It's perfectly possible for you, the way you are today, to be largely the result of processes you didn't choose and didn't control, and also for you to do things and decide things that you are the conscious author of.
If you reflect closely enough on your own thinking process, you will realize that every thought you consciously observe seemingly comes from nowhere. You don't get to choose your thoughts.
Consider I give you the task "think of a color". You choose blue. How did you choose blue? Did you consider the color lavender? If there are some colors you considered, how did you choose what to consider? And why wasn't lavender one of them? Did you choose not to consider lavender?
In reality, I give you the task "think of a color" and some thoughts of colors come into your consciousness via some unconscious process.
I think "red, blue, orange, red, oh wait I already thought of red, green, purple, red, why do I keep thinking of red, yellow, hmmmm ok I pick blue I guess."
I wasn't deciding which thoughts were going to come into my conscious view. They just appeared. When I had those thoughts of internal monologue about thinking of red so much, I wasn't choosing to have those thoughts either. I was just observing them as they appeared in consciousness.
When I settled on blue, I knew I wanted to avoid red because I had thought about it so much. But when did I choose to have that want? I didn't. It just appeared.
> If you reflect closely enough on your own thinking process, you will realize that every thought you consciously observe seemingly comes from nowhere. You don't get to choose your thoughts.
Perhaps this matches your experience. It doesn't match mine. Some of my thoughts might work like this, but not all of them.
More generally, the fact that none of us get to choose everything about our lives or our mental processes (which as a general claim I agree with, even if I disagree about a specific case as above) does not justify the claim that we choose nothing.
You are simply focusing on particular cases and then making invalid generalizations from them.
Can you provide an example of a thought for which recursively asking things like "why did you think that thought?" or "why did you choose to think that thought?" doesn't eventually arrive at an unconscious cause?
You are getting things backwards. Consciousness is not some magic dust that gets sprinkled over certain thoughts. Consciousness is part of the process of having thoughts. The fact that we don't have conscious access to all of our thoughts does not mean we cannot consciously choose anything about our thoughts.
Similar remarks apply to "free will". "Free will" is not some magic property that some of our mental acts get endowed with. It is part of the process of our thinking and acting. The fact that we don't and can't freely choose everything about that process does not mean we cannot make choices at all.
Do you believe that thinking is a physical process (that it’s embodied as or is the outcome of chemical/electrical activity)?
Do you believe that the initial state of all the particles in your body (including brain) exists? If so, what leads you to conclude that thinking is anything other than an on-going electro-chemical reaction that we’re just observing the outputs of and concluding that we’re controlling it somehow?
> If so, what leads you to conclude that thinking is anything other than an on-going electro-chemical reaction that we’re just observing the outputs of and concluding that we’re controlling it somehow?
These are not mutually inconsistent options; that's your mistake. Thoughts are some kind of (likely deterministic) information process, and thoughts counterfactually determine downstream actions and thoughts, i.e. earlier thoughts determine later thoughts. If I choose to lift my arm, I only lifted my arm because I chose to do so. The fact that I was determined to choose to lift my arm doesn't seem relevant to whether the thought caused my arm to lift.
> The fact that I was determined to choose to lift my arm doesn't seem relevant to whether the thought caused my arm to lift.
It is relevant to the moral implications, however. If you agree that the process that led to an action was deterministic, then it's entirely unreasonable to use punishments as vengeance, for example. It may still be morally tolerable to use them for rehabilitative or preventative means, but if you accept that they had no agency in the decisions, then applying more restrictions than the minimum needed to protect society seems to create another victim.
> If you agree that the process that led to an action was deterministic, then it's entirely unreasonable to use punishments as vengeance, for example.
I disagree; punishment could still be justified if it were effective at deterring or altering future behavior, particularly if it were more effective than any alternative. You can make a straightforward utilitarian calculation for this, for example.
In any case, the matter of justice is separate from the question of free will. Once you have free will, you still require further assumptions to argue for punitive or restorative justice.
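For illustration, the "straightforward utilitarian calculation" mentioned above could be sketched roughly as follows; the function, policies, and numbers are all invented for this example, not taken from the discussion:

```python
# Toy utilitarian comparison of punishment policies: benefit of crimes
# prevented minus the harm the punishment itself inflicts. All values
# are made up for illustration.

def net_utility(crimes_deterred, harm_per_crime, punishment_harm):
    # Net utility = harm avoided through deterrence - harm inflicted by the policy.
    return crimes_deterred * harm_per_crime - punishment_harm

no_punishment  = net_utility(crimes_deterred=0, harm_per_crime=10, punishment_harm=0)
harsh_sentence = net_utility(crimes_deterred=5, harm_per_crime=10, punishment_harm=40)
rehabilitation = net_utility(crimes_deterred=5, harm_per_crime=10, punishment_harm=15)

# On these invented numbers, punishment beats no punishment only when its
# deterrent benefit exceeds the harm it inflicts, and the less harmful
# policy with equal deterrence always wins.
print(no_punishment, harsh_sentence, rehabilitation)  # 0 10 35
```

The point of the sketch is only that the calculation justifies punishment via its effects, never via retribution for its own sake, which is the distinction the thread goes on to argue about.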
You say you disagree, but what you describe is not what I meant.
My point is exactly to argue that if people could not have acted differently, punishing people for the sake of causing them suffering in retribution, rather than for another purpose such as rehabilitation or deterrence, is not morally different from enacting the same on someone who had not carried out a crime.
> In any case, the matter of justice is separate from the question of free will. Once you have free will, you still require further assumptions to argue for punitive or restorative justice.
It is partially separate. I agree that if free will (in the non-compatibilist sense) were possible, it would not automatically justify vengeance. But it's linked in that if free will does not exist, then it is meaningless to assign responsibility in any less abstract sense than we might talk of a piece of code being "responsible" for a crash. Someone might be "responsible" in the sense of being the most proximate cause, but given that they could not have chosen differently, inflicting harm on them for choices they could not have made differently is not morally different from inflicting harm on any other random person who only did the only things they could.
To me, then, given that I find free will nothing more than an illusion, the near total lack of focus on rehabilitation and on minimising harm to prisoners makes our "justice" systems some of the largest organised, brutal, and immoral criminal enterprises. Most judges have done vastly more immoral harm than anyone they've ever convicted, for example. Of course they could not have chosen differently either, so I don't think punishing them for it would be any better. But changing the system would be (and yes, I do believe that this too is not a free choice, nothing more than cause and effect playing out, giving us an illusion of making decisions we cannot make differently).
> You say you disagree, but what you describe is not what I meant. My point is exactly to argue that if people could not have acted differently, punishing people for the sake of causing them suffering in retribution rather than for another purpose such as rehabilitation or deterrence is not morally different from enacting the same on someone who had not carried out a crime.
If it wasn't clear, I was disagreeing that "punishment as vengeance" is the primary motive for punitive justice, while pointing out some grounds typically used to justify it (deterrence mainly). I think reducing punitive justice to vengeance is largely a strawman.
Furthermore, I'll just note that Frankfurt refuted the principle of alternate possibilities decades ago, so I don't find this particular quality you ascribe to free will meaningful.
I think you read "punishment as vengeance" quite differently from how I read it.
I read "as" as "for the purpose of". With that reading, "Punishment as vengeance" would be undesirable if people's actions were pre-determined. "Punishment as deterrence" would still be desirable (via whatever mechanism allowed for a deterrence effect to exist in a world where actions were pre-determined).
> If it wasn't clear, I was disagreeing that "punishment as vengeance" is the primary motive for punitive justice, while pointing out some grounds typically used to justify it (deterrence mainly). I think reducing punitive justice to vengeance is largely a strawman.
I did not say it was the primary motive for punitive justice. I said using punishments for vengeance would be unreasonable. And vengeance clearly is a motive in a lot of punishment; otherwise the pushback against treating prisoners better, or shortening sentences, where the evidence shows both work just fine, would focus on the evidence rather than on emotions about whether prisoners deserve to be treated a certain way.
> Furthermore, I'll just note that Frankfurt refuted the principle of alternate possibilities decades ago, so I don't find this particular quality you ascribe to free will meaningful.
I'm not remotely compelled by the notion that Frankfurt's arguments against PAP make any difference whatsoever here, because the point is that strict determinism makes the very concept of moral responsibility meaningless: if one accepts determinism, then no entity had any ability to deliver another outcome, and the person holding the gun had no more control over the outcome than the gun or the bullet.
What then makes the person and not the gun or bullet, or any other entity that was a part of the cause and effect chain more morally responsible?
Moral responsibility to me rests with a minimum the possibility of choice as a starting point, and I reject that there's any evidence to suggest such a possibility exists.
Finding someone responsible in the abstract sense, for the sake of considering whether a deterrent or rehabilitation is necessary for the sake of wider society, is fine, with the caveat that many entities and events can have causal responsibility of various degrees, and if we genuinely cared about this we'd more often look past the individual who is the proximate sentient cause and address, e.g., underlying contributing societal conditions. This, to me, is another piece of evidence that the concern is often vengeance rather than deterrence.
Seeking to assign moral responsibility for the sake of blame or vengeance, on the other hand, would to me in itself be immoral. That doesn't mean I'm immune to assigning responsibility - we're wired to do so, the illusion of free will is strong, and it's hard to see past it.
> And vengeance clearly is a motive in a lot of punishment; otherwise the pushback against treating prisoners better, or shortening sentences, where the evidence shows both work just fine, would focus on the evidence rather than on emotions about whether prisoners deserve to be treated a certain way.
That too can be chalked up to deterrence. If prison were not unpleasant, why would it be a deterrent? Edit: unpleasantness has limits of course.
> I'm not remotely compelled by the notion that Frankfurt's arguments against PAP make any difference whatsoever here, because the point is that strict determinism makes the very concept of moral responsibility meaningless: if one accepts determinism, then no entity had any ability to deliver another outcome, and the person holding the gun had no more control over the outcome than the gun or the bullet.
This isn't the point of the Frankfurt cases. The point was that they are still responsible if they wanted to make that choice, regardless of whether or not they effectively had no other choice in terms of outcomes. That's exactly the case under Compatibilism.
> What then makes the person and not the gun or bullet, or any other entity that was a part of the cause and effect chain more morally responsible?
The simple fact that the person can learn to do better and make a different choice in the same circumstances in the future, where the bullet and the gun cannot. This is why mental competence is a factor in moral responsibility. Systems with minds that process moral content can and should be held morally responsible.
> That too can be chalked up to deterrence. If prison were not unpleasant, why would it be a deterrent? Edit: unpleasantness has limits of course.
You missed this part:
> where the evidence shows both works just fine
The evidence is clear that prisons can be far nicer without affecting the deterrence.
> This isn't the point of the Frankfurt cases. The point was that they are still responsible if they wanted to make that choice, regardless of whether or not they effectively had no other choice in terms of outcomes. That's exactly the case under Compatibilism.
And I'm saying I don't find this compelling at all. It's an utterly idiotic argument in the face of determinism, because their wanting to make the choice is, in a deterministic interpretation, itself beyond their ability to control.
> The simple fact that the person can learn to do better and make a different choice in the same circumstances in the future, where the bullet and the gun cannot. This is why mental competence is a factor in moral responsibility. Systems with minds that process moral content can and should be held morally responsible.
In a deterministic interpretation they are learning only in the same way a computer updating a weight in a decision graph or neural network does. So replace the bullet with a program firing a weapon, which by your description is then morally responsible.
> And I'm saying I don't find this compelling at all. It's an utterly idiotic argument in the face of determinism, because their wanting to make the choice is, in a deterministic interpretation, itself beyond their ability to control.
But that's irrelevant. There is no "them" beyond the entity that determinism constructed, and the entity that determinism constructed wanted to make that choice; therefore that entity should be held responsible, so that it learns right from wrong and can do better in the future.
It's really hard to understand exactly what the issue is. It seems you agree with all of the following:
1. The entity with a functional mind is the proximate cause of the outcome in a long chain of deterministic causes.
2. The entity's mind contains moral content and so understands the moral ramifications of its choice.
3. The entity wanted to make that choice and was not coerced by another entity.
4. Moral blame for wrongdoing and praise for ethical choices shapes future behaviour of entities.
Yet you somehow disagree that this process of moral feedback matches how humans apply moral responsibility, and that step 3 matches what we've been calling a "freely willed choice".
> In a deterministic interpretation they are learning only in the same way a computer updating a weight in a decision graph or neural network does
Yes.
> so replace the bullet with a program firing a weapon, which by your description is then morally responsible.
If the program is mentally competent just like a human, i.e. capable of learning and making decisions considering moral factors like we do, then yes, holding it responsible makes perfect sense. Bullets aren't that.
Exactly what holding a firing program responsible means can be different than how we hold humans responsible, but the process by which we assign responsibility is the same.
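As a toy sketch of the "learning as weight update" analogy both sides are using here (a fully deterministic system whose future choices change in response to corrective feedback), one might write something like the following; the class, threshold, and numbers are all invented for illustration:

```python
# Toy illustration of "holding a deterministic entity responsible" as a
# weight update: the same decision rule produces a different choice after
# feedback. Nothing here models a real mind; all values are made up.

class DeterministicAgent:
    def __init__(self):
        # Fixed initial state: propensity to take the harmful action.
        self.weight = 0.9

    def chooses_harm(self):
        # Deterministic decision rule: act iff propensity exceeds a threshold.
        return self.weight > 0.5

    def receive_feedback(self, penalty):
        # "Moral feedback" is just a deterministic weight update.
        self.weight -= penalty

agent = DeterministicAgent()
print(agent.chooses_harm())   # True: the initial state leads to the harmful act
agent.receive_feedback(0.6)   # corrective feedback after the act
print(agent.chooses_harm())   # False: same rule, updated state, different act
```

On the compatibilist reading above, this is exactly why feedback targets the agent and not the bullet: only the agent has internal state that feedback can change; on the incompatibilist reading, the update is still just cause and effect playing out.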
The portion of the chemical reaction that "thinks" of itself as having some kind of consciousness that's somehow non-physical rather than just reactions in a blob of salty tapioca.
To its own way of thinking, in its own made-up frame of reference, it could appear to be. In the actual, physical universe, it would still not be. Physics and chemistry do not care about our feelings and emotions.
OK, so the "we" that is observing the outputs and the "we" that thinks it is in control are both the tapioca. But the tapioca is wrong about being in control, because actually it's just physics and chemistry. Where is all that physics and chemistry happening? In the tapioca. So the tapioca controls the whole show, right? Which is it?
The notion that we freely choose anything is absurd on the face of it, as it'd mean a process where the outcome neither follows deterministically from the inputs, nor is random, nor any combination of the two.
Find me a logically coherent definition of free will that does not reduce it to an illusion, and you could write it up and have a great shot at a Nobel Prize in physics, maths, or both.
> The notion that we freely choose anything is absurd on the face of it, as it'd mean a process where the outcome neither follows deterministically from the inputs, nor is random, nor any combination of the two.
You're just imputing a meaning to "free will" which is not how people typically use the term. People don't mean a free choice to be free of antecedent causes, because that would be absurd: you choose to eat when you're hungry, you choose to sleep when you're tired, you choose a job based on what salary you want and work/life balance which are preferences shaped by your experiences, etc.
When people say someone made a choice of their own free will, they mean the choice was not coerced, and so that choice was consistent with their values rather than forced to conform to someone else's values. That is, it's an expression of their "will".
This is the view on free will known as "Compatibilism", because it's compatible with determinism, and it's basically the view we see in the legal system: if you are mentally competent to make a choice, and you were not coerced, you are held responsible for that choice.
This entirely sidesteps the issue, which is that if you employ a compatibilist notion of free will (in other words: a "free" will that is merely an illusion of agency), then you have no actual agency, and so assigning you moral responsibility for the actions taken is fundamentally unjust and in itself wildly immoral: You had no actual control of the choice taken.
I'd also strongly contend that you're wrong, and that if you actually dig into people's beliefs about free will deeply enough to ensure they understand what you're suggesting, you will find most people get annoyed, agitated, or outright angry at even the notion that they don't imbue outcomes with some magical fairy-dust agency they can't define.
Getting people to superficially agree with compatibilism is easy. Getting people to agree with it once you present them with the logical consequences of it, is not.
At least this has consistently been my experience over decades of having this discussion: the exceedingly rare compatibilists I come across tend to be a small subset of the exceedingly few people who read philosophy papers.
EDIT: I'd love to see a survey ask people if they think a purely deterministic process, e.g. a suitable program running on a computer, can ever be said to have free will. If people say "no" to that, they are rejecting a compatibilist definition of free will.
You can see the strong emotions this brings up if you look at the discussions over AI art right now, with the number of people angrily rejecting not just the current iteration of such tools, but the very notion that AI can ever be more than deterministic automatons, the implicit belief being that humans are not.
> This entirely sidesteps the issue, which is that if you employ a compatibilist notion of free will (in other words: a "free" will that is merely an illusion of agency), then you have no actual agency, and so assigning you moral responsibility for the actions taken is fundamentally unjust and in itself wildly immoral: You had no actual control of the choice taken.
Except that's irrelevant. The fact that a murderer had no "ultimate" control over becoming who he is, is irrelevant to the fact that he felt justified in committing the murder, and the entity that feels justified in murdering has made an unethical choice, and so that entity requires correction. This is the inescapable question free will addresses: who is responsible for a choice?
And this addresses your claim here:
> This entirely sidesteps the issue, which is that if you employ a compatibilist notion of free will (in other words: a "free" will that is merely an illusion of agency)
So this is not an "illusion of agency", this is exactly the kind of agency people care about: if your thoughts and feelings counterfactually determined your choice free of coercion, and if that choice was wrong, your thoughts and feelings require correction via feedback, which is holding you morally responsible.
> I'd also strongly contend that you're wrong, and that if you actually dig into people's beliefs about free will deeply enough to ensure they understand what you're suggesting, you will find most people get annoyed, agitated, or outright angry at even the notion that they don't imbue outcomes with some magical fairy-dust agency they can't define.
You can contend strongly all you like; I cited empirical evidence from a series of studies undertaken over years which tested whether people have compatibilist or incompatibilist intuitions. People agreed strongly with Compatibilism after it was properly explained to them, which is the opposite of what you claim.
> The exceedingly rare compatibilists I come across tends to be a small subset of the exceedingly few people who read philosophy papers.
Well then you're in for a shock, not only from the paper I linked above, but also because a comfortable majority of philosophers who actually read and write philosophy papers for a living are Compatibilists:
> I'd love to see a survey ask people if they think a purely deterministic process, e.g. a suitable program running on a computer, can ever be said to have free will. If people say "no" to that, they are rejecting a compatibilist definition of free will.
We already know the answer to this: anthropomorphization is a near universal cognitive bias, by which humans assume a mind behind phenomena, particularly phenomena that exhibit complex behaviour. This is why people are polite when conversing with chatbots like ChatGPT.
If there's a mind that's responsive to moral feedback as I described above, then it makes perfect sense to hold it responsible, and I am 99.9999% certain this is how most people would treat this question.
> We already know the answer to this: anthropomorphization is a near universal cognitive bias, by which humans assume a mind behind phenomena, particularly phenomena that exhibit complex behaviour. This is why people are polite when conversing with chatbots like ChatGPT.
That does not give us the answer to what I wrote at all. Yes, people will casually treat computer systems as thinking entities. I wrote my first chatbot that fooled people some 28 years ago. It was not a good chatbot: all it took to fool people into long conversations was to continuously pick random responses from a set of only 4-5 or so.
But that does not tell us whether or not they would think that a computer exhibits free will, nor whether they think it ever could. Assuming this cognitive bias equals a belief that is maintained when probed more deeply is a mistake I've seen many times when discussing this with compatibilists. It's easy to assume a compatibilist view of free will if you look at surface-level indications; people are not consistent, nor do most think carefully through unstated implications. We - me included - will often act as if something has agency and/or is a fully thinking being with a meaningful form of "magical" free will even when we know full well the entity in question isn't even an advanced automaton.
Case in point: the trivial chatbot mentioned above. Some people would pour out their life story. Some would get angry at it for repeating itself. Some would tell it they thought it was probably a bot, and still talk to it as if it was a sentient being afterwards. Does that mean they have thought it had agency anyway since they treated it as if it did after "outing" it as a bot? Of course not.
It meant that in that context it met some immediate desire or need of theirs to treat it as if it was another mind they could connect with, because it was a convenient fiction.
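For concreteness, a trivial chatbot of the kind described above (ignore the input entirely, pick a random canned reply each turn) can be sketched in a few lines; the replies here are stand-ins I've invented, since the original bot's responses aren't given:

```python
import random

# Minimal sketch of a "pick a random response from a small set" chatbot:
# no state, no understanding of the input, just a random draw per turn.
# The canned replies are placeholders, not the original bot's.

CANNED_REPLIES = [
    "Interesting. Tell me more.",
    "Why do you say that?",
    "How does that make you feel?",
    "I see. Go on.",
    "That's worth thinking about.",
]

def reply(_user_input: str) -> str:
    # The input is ignored entirely; the reply is a uniform random choice.
    return random.choice(CANNED_REPLIES)

print(reply("I think free will is an illusion."))
```

The point of the sketch is how little machinery it takes: there is nothing remotely mind-like here, yet, as described above, people would still pour out their life stories to it.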
Compatibilist free will is a convenient fiction. And so if you were to ask me whether I agree that "free will" as defined by compatibilists exists, I'd say yes. I just don't find it to have any moral implications, and I see it as no more than an illusion and a smokescreen. And so if you were to ask whether I think "free will" exists, I say no. It's an absurd notion. There's nothing "free" about the compatibilist notion of free will.
Failing to account for this ability to buy into a fiction like this on a contextual basis makes any investigation into what people actually think of something like free will entirely meaningless twaddle (and to be clear, most of what I've seen from non-compatibilists in this respect is also meaningless twaddle). Trying to dig into this with short surveys, without multiple questions aimed at detecting inconsistencies in the answers that signal a lack of understanding of the implications, is a big red flag.
> If there's a mind that's responsive to moral feedback as I described above, then it makes perfect sense to hold it responsible, and I am 99.9999% certain this is how most people would treat this question.
Even if we posit that people would treat it that way, it does not mean it makes perfect sense or is justified. If that is what people believe, it would just tell me that most people are just as immoral, cruel, and callous as they consider criminals to be. Personally, if one considers it moral to apply a corrective system to attempt to reduce crime (and I do, at least for crime that actually has victims), then I'd consider it equally moral to apply the same form of corrections to people engaged in that harm against others as part of current "justice" systems. Participating in the oppression of the huge number of people being punished in ways that do not serve a corrective purpose, the way prosecutors, judges, and police do, amounts to horrifying levels of immorality from the point of view that the people they inflict harm on could not have acted differently.
> Except that's irrelevant. The fact that a murderer had no "ultimate" control over becoming who he is, is irrelevant to the fact that he felt justified in committing the murder, and the entity that feels justified in murdering has made an unethical choice, and so that entity requires correction. This is the inescapable question free will addresses: who is responsible for a choice?
If most people genuinely saw it like this - a matter of correction rather than vengeance - the justice system would look fundamentally different. If the goal were indeed correction, sentences would follow evidence of risk, and imprisonment would focus on rehabilitation. All the evidence we see of how people act and respond to sentencing shows that people see prison as largely punitive, an attitude that is wildly immoral if the "entity in question" could not have chosen differently given the same inputs and the same state.
> So this is not an "illusion of agency", this is exactly the kind of agency people care about
Irrespective of whether or not it is "the kind of agency" people care about, if that is what people care about, then people are using compatibilism to justify to themselves allowing harm to people who had no ability to choose differently. If that is the kind of agency people care about, that does not make it better. It makes people worse for understanding that they are harming people who can't choose differently.
> if that choice was wrong, your thoughts and feelings require correction via feedback, which is holding you morally responsible.
The first part is a utilitarian argument for minimally invasive rehabilitation, not for punishment that does not achieve those goals. The latter part is a smokescreen - if you could not possibly have made a different choice, there's nothing moral about holding you responsible, and punishing you without trying to minimise the impact on you is itself inflicting immoral harm.
This is one of those things that tends to get people very agitated when you dig into this. If you accept a purely materialistic universe, then the current "justice system" is nothing of the sort. You can reasonably argue that correction is necessary, but there's certainly no justice in applying further harm to people who had no ability to choose otherwise.
> You can contend strongly all you like; I cited empirical evidence from a series of studies undertaken over years which tested whether people have compatibilist or incompatibilist intuitions. People agreed strongly with Compatibilism after it was properly explained to them, which is the opposite of what you claim.
I see no evidence of this in the paper you linked to. What these kinds of discussions tend to show me is that you can make all kinds of superficial statements like the ones referenced in the paper, and get people to make seemingly coherent assessments, when you give people the "escape clause" that, however much you tell them otherwise, there's some nebulous special thing about humans that gives us an ability to decide otherwise. But once you start digging into it, people tend to get very upset when you suggest their ability to make decisions is computationally equivalent to a computer's (hence my edit, which I realise you might not have seen when you replied to my post above).
> Well then you're in for a shock, not only from the paper I linked above, but also because a comfortable majority of philosophers who actually read and write philosophy papers for a living are Compatibilists
How is that shocking? People who read and write philosophy papers are a tiny proportion of the population. I'm totally unsurprised that people who read philosophy papers for a living come to this view - it's hard to reconcile a strongly materialistic view of free will with living in a society that you would then necessarily see as brutally immoral and without any semblance of justice.
When you are focusing on a problem, and then catch yourself distracted daydreaming, when did you choose to direct your attention away from the problem and towards the daydream?
It's not obvious to me at all that I consciously author the choice of where to direct my attention. And even in the cases where I do think "Ok, I am going to think about X", did I author that thought? On down recursively.
Thoughts appear, and I notice them, but I never choose to notice them. It's all quite mysterious.
The nature of attention-direction, the way it moves and such, be it a choice or otherwise, is indeed mysterious.
One phenomenon that you might consider is that when you direct your attention at one thing (be it a thought or whatever), you become blind to other things. And this blindness is proportional to the intensity of the attention-directing.