> I suspect that the true hard problem of consciousness is 'why do we (feel that we) have free will?'
Free will and consciousness are separate issues; you could potentially have one without the other. What matters for free will is whether choice is free and what that freedom amounts to. What matters for consciousness is subjectivity. For example, you can feel pain while having no choice about feeling it, say if you were restrained and unable to do anything about it.
And potentially, a robot could be free to make choices while not feeling anything.
I doubt that 'will', as it is commonly conceived, has any meaning for a non-conscious agent, or at least it would have a very different one. It's similar to the way I think IIT is clouding rather than clarifying the issue. For related reasons, I feel that compatibilist positions largely avoid the issue rather than address it.