I don't think the rich and powerful will fund an AGI that has free thought and free initiative, but perhaps some other group will. The point of AGI, from what I can see, is to replace unreliable humans with reliable machines, so that humans can be done away with entirely.
I guess I'm skeptical that a truly general AI could be built without those things. That seems like a fundamental contradiction to me. I don't think an intelligence could reliably make good choices in complex situations without understanding what the bad choices are, and to truly understand bad choices, I think it has to be possible to make them.