
Given the trajectory that AI is taking in terms of being a very helpful code assistant, what type of CS education would be ideal for current elementary school kids? Sure, some will want to go all the way and be programming the AI; but for the rest, it would probably make sense to be able to use the tools in the same way that today's "tech savvy" can accomplish 10x as much as someone who isn't familiar with computers. What parts of computer programming will be necessary, and which parts can be summarized or jettisoned?

I ask this question as a mostly non-technical person, wondering what skills it makes sense to develop in my children, so they can "skate to where the puck is going", so to speak.




Scratch seems to be a very popular block-based way of getting kids into programming (games). Before that, encouraging your kids to solve new problems and constantly pursue their curiosities is a good way to build the skills and mentality. Divergent thinking (what are the different uses for this thing / what are the different ways to solve this problem) and convergent thinking (what characteristic makes all of these different things alike) are ways to get them comfortable being uncomfortable, which is what refactoring and extricating demand. Pattern recognition via IQ tests also helps, since those are heavy on programming-related patterns (visual XOR, AND, etc.)... as does asking them why the IQ test might be wrong, i.e. which other "tile" could come next besides the provided answer.


If they are in the sixth grade now, then it may be 2030 by the time they graduate from high school.

I am working on a website right now that uses OpenAI's models, and it can already write and (immediately) deploy simple web pages with interactions and calculations, customized exactly to what my customers ask for. I am working on the dialogue interface, the text-to-speech, and some other things to make it work better, and I plan to have a new release at the end of the week. The version that is up now is hard to use: it requires special commands, has some rough edges, and uses the general-purpose ML model (which can code, but not as well as the code-specific model I am switching to).
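To give a sense of the shape of it: a minimal sketch of that kind of flow (not my production code; the model name, prompt wording, and file paths are just illustrative, and it assumes the pre-1.0 openai Python client with an API key in the environment) looks something like this:

  import openai  # legacy 0.x client; assumes OPENAI_API_KEY is set in the environment

  def generate_page(request_text: str) -> str:
      """Ask a code-capable model for a single self-contained HTML page."""
      prompt = (
          "Write one self-contained HTML file (inline CSS and JS) that does the following:\n"
          + request_text
          + "\nReturn only the HTML, with no explanation."
      )
      resp = openai.Completion.create(
          model="text-davinci-003",  # placeholder; a code-specific model would go here
          prompt=prompt,
          max_tokens=1500,
          temperature=0.2,
      )
      return resp.choices[0].text

  # "Deploying" can be as simple as writing the file where the web server can see it.
  html = generate_page("a tip calculator with a slider for the tip percentage")
  with open("site/tip_calculator.html", "w") as f:
      f.write(html)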

Within 3 years I believe that these types of systems will be doing a very significant percentage of programming tasks, both for programmers (as "assistants") and end-users.

Within 7 years I think that, for most use cases, programming will have evolved into mostly directing these types of coding assistants.

The challenging thing, though, is that kids still need to learn how to read, write, think, and problem-solve, despite the fact that AI is starting to be able to do quite a lot of that for us.

They need to learn solid problem-solving skills: problem decomposition, how to search for and find answers when they hit roadblocks (using ChatGPT, Google, or whatever tools come along), and just plain persistence, logic, and abstraction. The struggle will be getting them to do some of this for themselves rather than cheating like most of their classmates.

But at the same time they absolutely have to learn how to use the new AI tools. It will be critical to stay competitively productive as an adult or just to be able to fit in. There will be important new tools every few months or years.

Where this is really headed, in my opinion, is that by the 2040s, high-bandwidth brain-computer interfaces that tightly integrate cognition with advanced AI systems (2-10x smarter than humans) start to become commonplace. These will enable different paradigms for communication and society. Before we get to that, AR glasses/goggles will integrate AI deeply into most people's lives.


> Within 3 years
> Within 7 years

What is your justification for these estimates? I've been trying to pay attention to what various experts think (I'm not one), and it seems far from a foregone conclusion that this will be the case; it might not even be the case that scaling up produces the same outsized benefits we've seen so far.

François Chollet, for example, seems like someone who is pretty in the know about the current SOTA and is not nearly as optimistic, as best I can tell.

https://twitter.com/fchollet/status/1620617016024645634?cxt=...

> But at the same time they absolutely have to learn how to use the new AI tools. It will be critical to stay competitively productive as an adult or just to be able to fit in. There will be important new tools every few months or years.

Definitely agree here.

> Where this is really headed, in my opinion, is that by the 2040s, high-bandwidth brain-computer interfaces that tightly integrate cognition with advanced AI systems (2-10x smarter than humans) start to become commonplace.

My gut feeling is that this is pretty insane and unlikely to be the case, but I suppose insane things have happened before.


Like I said, my website can already do it to some degree for end users. GitHub Copilot etc. is very popular with programmers. I am just assuming the models will continue to improve and be deployed.

The only reason to be unoptimistic like that person is if you assume that model capability will remain essentially static over several years. But even with current models (and new ones are released every few months at this point), there is huge potential for replacing quite a lot of real software engineering work, especially when you start putting them in loops and specializing/priming them for particular types of programming.
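By "priming" I just mean something like a fixed system prompt that narrows the model to one kind of task, so its output is predictable enough to validate automatically. A rough sketch using the chat completions API (the schema, model name, and prompt here are purely illustrative, not anyone's actual setup):

  import openai  # legacy 0.x client; assumes OPENAI_API_KEY is set in the environment

  # A fixed system prompt that specializes the model for one narrow niche:
  # turning plain-English reporting questions into PostgreSQL queries.
  SQL_ASSISTANT = (
      "You translate plain-English reporting questions into a single PostgreSQL query. "
      "Schema: orders(id, customer_id, total, created_at), customers(id, name, country). "
      "Output only SQL, with no commentary."
  )

  def english_to_sql(question: str) -> str:
      resp = openai.ChatCompletion.create(
          model="gpt-3.5-turbo",  # illustrative; swap in whichever model you use
          messages=[
              {"role": "system", "content": SQL_ASSISTANT},
              {"role": "user", "content": question},
          ],
          temperature=0,
      )
      return resp.choices[0].message["content"]

  print(english_to_sql("Total revenue per country for the last 30 days"))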


Would love to try this out! Contact info in profile...


Maybe learn the same things but use tools like ChatGPT to get help where needed?


The sense I'm getting is that the output will look authoritative but may not work. When it doesn't work, the failure may be obvious, or it may be latent. What does someone need to know in order to detect latent flaws (and to ask the right questions in the first place)?

I imagine it helps to know a lot of the same basics, but as AI gets better at reliably performing certain types of functions, it becomes OK to view more and more stuff as 'black boxes'. I'm trying to figure out what those black boxes are, and what they will be in the future, because the more time you spend learning something that becomes irrelevant, the less time you can spend learning other stuff that remains relevant (coding or otherwise).


You try to run the code, or your kid tries to run it. I do plan to eventually build in an auto-debugging mode, to the extent that is possible. About half the time, feeding an error message or bug description back into the coding model is enough for it to fix the bug on its own.
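The loop itself is not complicated. A minimal sketch of what I mean (the ask_model helper is a stand-in for whatever completion call you use, and the prompts are just illustrative):

  import subprocess

  def ask_model(prompt: str) -> str:
      """Stand-in for a call to a code-capable model (e.g. the OpenAI API)."""
      raise NotImplementedError

  def generate_with_auto_debug(task: str, max_attempts: int = 3) -> str:
      code = ask_model("Write a Python script that does the following:\n" + task)
      for _ in range(max_attempts):
          result = subprocess.run(
              ["python", "-c", code], capture_output=True, text=True, timeout=30
          )
          if result.returncode == 0:
              return code  # ran cleanly; good enough for this sketch
          # Feed the error straight back to the model and ask for a corrected script.
          code = ask_model(
              "This script failed with the error below. Return a corrected version.\n\n"
              "Script:\n" + code + "\n\nError:\n" + result.stderr
          )
      return code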


Better to think of it like a fellow student (who may get things wrong) than a teacher.

Learning how to deal with possibly-bad advice is a real world skill, as anyone who's used StackOverflow should know.



