> We need to have a conversation about the possible paths to make programming more widely accessible.
We've been doing this for as long as computers have existed and have made virtually no progress (pun intended) since 3GL languages. No other industry has ever tried as hard as ours has to make itself redundant. Programming is about as simple now as it will be for the foreseeable future; any simplification sacrifices versatility.
If you want to make programming more accessible then you need to work on people's abstract reasoning skills.
I strongly disagree with this. Today, it is "think to program", but the computer is right there; it can help out a lot more than just interpreting the program you thought hard about writing. Instead, the computer can help us think. It might not be able to do the reasoning for us, but it can help us break it up into smaller pieces.
A nice analogy is the difference between Ultron, Hawkeye, and Iron Man. Ultron represents the singularity where computers just program for us; it will come someday, but it isn't very interesting to us. Hawkeye, the "super" archer, is analogous to the programmer with advanced abstract reasoning skills: he is amazing, but there just aren't going to be many of him. Humans aren't getting much smarter in general.
Then there is Iron Man, who makes himself awesome by using technology, from his power suit to his holographic design environment and interactive voice assistant. That is the sweet spot for us.
It's actually possible that we are: https://en.wikipedia.org/wiki/Flynn_effect ; summary: "The Flynn effect is the substantial and long-sustained increase in both fluid and crystallized intelligence test scores measured in many parts of the world from roughly 1930 to the present day." (But both the reality and the interpretation of this are matters of some dispute.)
Moreover, without any computer assistance until very recently, our ability to do mathematics has advanced marvelously over the past few thousand years. We shouldn't underestimate the power of education, culture, better notation, and accumulated wisdom to advance human capacities even at purely intellectual endeavours such as math (or, for another example, chess).
Of course, none of this says that computers themselves can't help us program, which is your primary point. I just think you're being a little pessimistic about "merely human" ways of improving ourselves.
Well said. Programming is just simulating the world. If you don't understand problems well enough, we could teach you any language there is on earth and it wouldn't matter.
Accessible tools and laptops don't make programming any easier than cheap hammers, nails, and glue make carpentry easy.
After a while it's just you. And that's the biggest thing.
Ultron, Hawkeye, and Iron Man are fictional characters. It’s easy to imagine an omnipotent programming AI, but we have no reason to believe any such thing could exist.
No, we don’t. Nuclear fusion is a theory, and there is evidence it is possible: we can observe it by looking at the stars.
Omnipotent AI is not a theory. There’s no evidence it is possible. I have yet to even see a falsifiable hypothesis about how it could come about. The most advanced hypotheses are something along the lines of “they’re getting smarter, so eventually they’ll be infinitely smart”, which is not a strong claim.
Evidence suggests intelligence is niche-specific. Humans are smart, but in the middle of the ocean a jellyfish is smarter. A better prediction about AI is that they will surpass us in some ways and not others.
That's not what was suggested. Human-level intelligence is sufficient to cause a singularity (because then they can start improving their own programs).
> better prediction about AI is they will surpass us in some ways and not others.
That isn't a better prediction at all. Regardless, it is uninteresting to my original point, which is that we want to live in the Iron Man, not Ultron, phase now anyway.
> Human-level intelligence is sufficient to cause a singularity (because then they can start improving their own programs)
Your assumption is that human-level intelligence is bound by offline simulation capacity.
If humans are already optimally utilizing offline simulation (in the biological world we call that imagination) then the “human level” AI will have just the same limitations as a human.
That’s the counter-proposal you have to make falsifiable for your guess to become a hypothesis:
That any human-level AI, in order to become human level, will be bound to the same interactive constraints on learning that humans are.
Think about World War II for example: were strategies limited by offline simulation capacity, or were they limited by the fact that you can only try (and therefore sample consequences for) one at a time?
I’m not making a claim here either way: I’m just saying you’re waving this whole debate away by saying “human intelligence plus unlimited simulation equals superhuman intelligence”.
I’m not saying you’re wrong; it’s an interesting thought experiment. I’m just saying that not only is there zero evidence for that, there’s not even (to my knowledge) a robust model describing the mechanism by which that could be expected to be true.