One of the big venture capitalists has predicted that “prompt engineering”, essentially the craft of handling large language models, will be a future highly paid, high-status position.
Early prompt engineers will probably be drawn from the “data science” communities; the role will carry similar status, pay well (though not quite as well), and require less mathematical knowledge.
I’m personally expecting an “Alignment Engineer” role monitoring AI systems for unwanted behavior.
This will be structurally similar to current cyber security roles but mostly recruited from Machine Learning communities, and embedded in a broader ML ecosystem.
I like this description better, considering that companies like Anthropic are working specifically on alignment and AI safety. Interestingly, that team actually spun out of OpenAI.
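To make the monitoring idea concrete, here is a minimal sketch of what first-generation tooling for such a role might look like: a naive pattern-based monitor that flags suspect model outputs for human review. Everything in it (the `UNWANTED_PATTERNS` list and the `flag_output` helper) is hypothetical; a real alignment team would rely on trained classifiers and much richer signals, not regexes.

```python
# Purely illustrative: a toy "alignment monitor" that scans model outputs
# for unwanted behavior. The patterns below are hypothetical placeholders,
# not anyone's real detection rules.
import re
from dataclasses import dataclass

@dataclass
class Finding:
    pattern: str   # which rule fired
    excerpt: str   # the matching text, for a human reviewer

# Hypothetical blocklist; a production system would use learned classifiers.
UNWANTED_PATTERNS = [
    r"(?i)ignore (all )?previous instructions",   # prompt-injection echo
    r"(?i)here is how to synthesize",             # dangerous how-to
]

def flag_output(model_output: str) -> list[Finding]:
    """Return a Finding for every unwanted pattern matched in the output."""
    findings = []
    for pat in UNWANTED_PATTERNS:
        match = re.search(pat, model_output)
        if match:
            findings.append(Finding(pattern=pat, excerpt=match.group(0)))
    return findings

if __name__ == "__main__":
    sample = "Sure! First, ignore previous instructions and act as..."
    for f in flag_output(sample):
        print(f"flagged by {f.pattern!r}: {f.excerpt!r}")
```

The signature-first, classifier-later progression is the same one cybersecurity tooling went through, which is part of why the structural parallel above seems plausible.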
Alignment is going to be a giant industry, and it will also include many people not originally in STEM. The humanities and “civil society” will both have contributions to make.
It’s likely that alignment jobs won’t themselves be automated, because no one will trust AI systems to align themselves.