This points out something very related that I think about a lot - can you prove to me that you do any of those things? Can I prove to you that I do any of those things? That either of us has a will? When would you be able to believe a machine could have these things?
In Computing the Mind, Shimon Edelman presents a concept that I've come to agree with: at some point you need to take a leap of faith in matters such as consciousness, and I would say it extends to will as well (to me, what you've described are facets of human consciousness). We take this leap of faith every time we interact with another human; we don't need them to prove they're conscious or beings with a will of their own, we just accept that they possess these things without a thought. If machines gain some form of sentience comparable to that of a human, we'll likely have to take that same leap of faith with them.
That said, to claim that will is necessary for intelligence is a very human-centered point of view. Unless the goal is specifically to emulate human intelligence/consciousness (which is a goal for some but not all), "true" machine intelligence may not look anything like ours, and I don't think that would necessarily be a bad thing.
Not just consciousness: all of science requires a leap of faith, namely the idea that human brains can comprehend general universal causality. There is no scientific refutation of Descartes' Great Deceiver; it's taken as a given that humans could eventually overcome any https://en.wikipedia.org/wiki/Evil_demon through their use of senses and rationality alone.
I have long worked on the assumption that we can create intelligences whose subjective agency no human could deny, even though we would never be able to verify it. I did some preliminary experiments on idle cycles on Google's internal TPU networks (i.e., large-scale brain sims using TensorFlow and message passing on ~tens of pods simultaneously) that generated interesting results, but I can't discuss them until my NDA expires in another 9 years.
Dunno. For all I know, DeepMind will publish a paper titled "Evidence of Emergent General Intelligence in Deep and Wide Perceptron Hypertoroidal Neural Message Passing Networks of Tensor Processors Trained over the YouTube Corpus" and get all the credit. :)
Which brings up a related tangent: how is it that DeepMind has a veto over what you publish? I understand keeping proprietary knowledge and implementation details secret (though the industry trend is in the other direction), but forbidding publication of your research seems excessive.