>So it's very possible we could build an AGI, but won't know how to make it any smarter than us.
Make it as smart as the smartest of us and then it'll figure out how to become smarter on its own. Then ask it to explain to us how we work.
Tongue in cheek, but once an almost-as-intelligent-as-a-human level is reached, the next step is not that far-fetched: quantity. Many copies, each learning several times faster than a human, and we have progress on a different scale. Or we can always try mutating the thing to see what comes out - crude, but it worked at least once.
> once an almost-as-intelligent-as-a-human level is reached, the next step is not that far-fetched: quantity. Many copies, each learning several times faster than a human, and we have progress on a different scale.
But this doesn't necessarily follow. To a mind working at a higher speed, the experience would be one of time slowing down -- imagine what you'd do if time slowed down. The machine wouldn't necessarily learn more (there may be a limit to how much any mind can learn), and it would probably experience boredom and frustration (as everything around it would be slow) that might drive it crazy.