If your behaviour can be predicted then you don't have free will.
Obviously you can argue about whether human behaviour really is predictable, or even about the implication itself (compatibilism, for example, kinda disagrees with it), but I'd say the two are related.
From my point of view, the fact that someone can predict what I will do from my past behavior does not imply I don't have free will.
The only way to prove I don't have free will would be for someone to predict my behavior without knowing anything about my past: like a fortune teller telling my friend (if the fortune teller told me directly, it might become a self-fulfilling prophecy :D) something I would later do, without ever having interacted with me.
I think it is true. If something could predict all your actions with 100% certainty, then you have no free will: you have no ability to decide on a different outcome than the one that is predetermined.
Although if it weren't like that and you could make truly random decisions, I'm not sure what that would say about your free will either.
If I had to guess, I don't think I have free will. In my opinion my actions are a deterministic function of the state of my mind (including memory, personality) + input. I have no choice in the matter, only an illusion of one.
> In my opinion my actions are a deterministic function of the state of my mind (including memory, personality) + input. I have no choice in the matter, only an illusion of one
It's important to note that as far as materialist physics goes, "you" are literally "your state of mind (including memory, personality) and input". The fact that this system (i.e. "you") produces an apparently deterministic result doesn't mean that "you" didn't do it.
The common confusion comes from the fact that systems cannot, in general, predict their own outputs without going through the required computational process; I think this is related to the concept of "computational irreducibility". So "you" as a system need to go through the motions to figure out what you eventually decide to do. Before the deliberation you can't predict the result, and thus it "feels as if" "free will" is a messy, indeterministic process. But that's not a given.
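To make that concrete, here's a toy sketch (my own illustration, not from the thread) using Wolfram's Rule 30 cellular automaton, a standard example of computational irreducibility: as far as anyone knows, there is no shortcut formula for the center column, so the only way to learn its value at step N is to actually run all N steps.

```python
# Rule 30: each new cell is left XOR (center OR right).
def rule30_step(cells):
    """Apply one step of Rule 30 to a tuple of 0/1 cells (zero-padded)."""
    padded = (0, 0) + cells + (0, 0)
    return tuple(
        padded[i - 1] ^ (padded[i] | padded[i + 1])
        for i in range(1, len(padded) - 1)
    )

def center_column(steps):
    """Center-cell values over `steps` iterations, starting from one live cell.

    No known closed form predicts this sequence; we must simulate each step,
    just as (on this view) "you" must actually deliberate to reach a decision.
    """
    cells = (1,)
    out = []
    for _ in range(steps):
        out.append(cells[len(cells) // 2])
        cells = rule30_step(cells)
    return out
```

Even knowing the rule and the initial state exactly, the system's "deliberation" can't be skipped.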
Weird things happen when some other system can predict "you". The problem there is that if "you" can be simulated, you don't know which "you" is "real". But the mere fact that your state of mind and inputs will deterministically produce decisions doesn't mean "you" don't have a choice. In fact it is the opposite: "you" do have a choice, and _only_ the system "you" makes the choice (and thus it is determined solely by "you"; hence, deterministic!)
I think that is exactly my stance. You are a system that is complex, iterative, and chaotic (in the mathematical sense). You are probably deterministic, but the only way you can be accurately predicted is by simulating your entire brain and sensory experience, along with memories, to the degree it's no longer a simulation; it's just building a copy. At that point, it's not a prediction; it's creating a copy of you and seeing what they would do.
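As a toy illustration of why approximate prediction fails for chaotic systems (my own sketch, not part of the original comment), consider the logistic map x → r·x·(1−x) with r = 4: an imperfect "copy" whose starting state differs from the original by only 1e-12 tracks it for a few dozen steps, then diverges completely. Anything short of a perfect copy stops being a prediction.

```python
def trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), returning all states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

orig = trajectory(0.3)
copy = trajectory(0.3 + 1e-12)  # an almost-but-not-quite-perfect "copy"
# The copy agrees with the original early on; by ~step 40 the tiny initial
# error has been amplified to order 1 and the two histories are unrelated.
```

Brains are vastly more complex than one quadratic map, so the point only gets stronger: a predictor that isn't a bit-perfect copy loses track almost immediately.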
I'd add that the devil is in the details of the "copy".
To accurately simulate/copy the "sensory experience", you'll have to simulate/copy the person's entire "observable universe". Unless you just want to predict the behavior of a person locked inside a solitary, dark, sensory-deprived room, the practical implication is that you need to simulate/copy the entire observable universe to run an accurate simulation/copy.
Which is why I always tune out when Laplace's demon is brought up. Granting such an obviously impossible premise means all kinds of insane/wrong conclusions can pop out the other end.
The prediction machine in the thought experiment would make its prediction after it knows the result of the coin flip and before the subject acts.
There's admittedly a relationship, but it's very subtle and nuanced.
For example, you can predict that I'm going to pay my rent with > 90% accuracy, but that has nothing to do with whether I'm paying out of my free will or not.
A Marxist will argue that it's the oppression of the bourgeoisie that makes me pay, and a free market capitalist would counter that I made the choice "freely". But the truth is somewhere in between, and probably depends "subjectively" on how the person "feels" about doing it.
Of course if the prediction for all actions (not merely within an experimental setting) is close to 100% then there's probably something weird about it, but I don't think that's remotely possible. Not with current tech, and unlikely with future tech either.