One reaction to that statement might be "so be it."
I did not, however, come here to defend the idea that consciousness is an algorithm. Personally, I consider it rather more plausible that a computer plus a random number generator might achieve consciousness by simulating the sorts of physical processes that occur in a brain. And if it is a 'true' random number generator, rather than a deterministic PRNG, then we are no longer talking about math, at least not in the sense you use to conclude Platonism.
One might respond by saying that if the universe is deterministic, then there is no such thing as a 'true' random number generator, as distinct from a PRNG. My understanding of the philosophical implications of QM, and its competing interpretations, is insufficient for me to be sure whether the universe as we experience it is deterministic, but regardless of whether it is or is not, I suspect that the true hard problem of consciousness is 'why do we (feel that we) have free will?'
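To make the PRNG vs. 'true' RNG distinction concrete, here is a minimal sketch in Python (whether the OS entropy pool counts as 'truly' random, or is just determinism we can't observe, is of course exactly the point in dispute):

```python
import random   # deterministic PRNG: the whole sequence follows from the seed
import secrets  # draws on the OS entropy pool (os.urandom), which may mix in hardware noise

# Deterministic PRNG: reseeding with 42 reproduces the same five digits every run.
prng = random.Random(42)
print([prng.randint(0, 9) for _ in range(5)])

# OS entropy source: not reproducible from any seed the program controls.
# Whether this is 'truly' random or merely unobserved determinism is the open question.
print([secrets.randbelow(10) for _ in range(5)])
```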
I don't think there is, or can be, a reason to believe a 'true' random number generator exists. What reason would there be to believe in such a thing?
If something seems random to us, the likelier explanation is that we couldn't observe it closely enough, so we don't know the exact algorithm or causes behind that event.
To me it seems that even if something behaves like a true random number generator, there's no reason to believe it actually is one. The simpler explanation is that there's some logic involved which we just can't observe (yet).
It's the same as how we can't prove that a man in the clouds does not exist, but there's no good reason to believe that one exists - except for people's ego, of course.
> I suspect that the true hard problem of consciousness is 'why do we (feel that we) have free will?'
Free will and consciousness are separate issues. You can potentially have one and not the other. What's important for free will is whether choice is free and what that freedom amounts to. What's important for consciousness is subjectivity. You can feel pain while having no choice about feeling it, for example if you were restrained and unable to do anything about it.
And potentially, a robot could be free to make choices while not feeling anything.
I doubt that 'will', as it is commonly conceived, has any meaning for a non-conscious agent, or at best a very different one. It's similar to the way I think IIT is clouding rather than clarifying the issue. For related reasons, I feel that compatibilist positions on the issue are largely avoiding it.