Reasons are all the same: difficult to automate with AI due to the non-repeating nature of the work. However, the repeating parts will be automated, hence the GP is likely to be a bot, and things like Copilot will replace most web devs. Also, houses can be mass-built in container-like pods and stacked. So this is very nuanced. Checkmate, cheeky comment section.
will not survive:
retail worker (non-luxury goods)
delivery driver in cities with regular grid-like streets aka most of the US
truck drivers between cities
Twitter content moderator
Reasons are all the same: easy to automate with AI due to the repeating nature of the work.
I think the common theme is that if you want something nice, like seeing a human doctor, some personalised service, or a nice brick house, you are going to see a human. But this will cost tremendously. So rich people will interact with humans for most services/products while poor people will be interacting with bots. It's already happening (automated call-centre helplines). Overall very dystopian.
Yes, laying bricks in a long straight line is automated. Laying bricks in more intricate ways would take a lot more time.
Also, the machine still has to have human supervision, and humans seem to clean up the mortar.
We're getting closer to automating the programmer's job too, though. In the end it's a text problem: get a textual description (a ticket) and generate code that fits it. We have models that can generate non-repeating art, so why not non-repeating code?
Programming in 50 years would then be formulating the textual description accurately enough that the AI won't create garbage. So basically what programming already is.
Also fixing, or at least iterating on, the corner cases and bugs that the AI generated. Someone needs to be there to tell the AI to repeat what it did until the generated solution is correct.
...but once you get your "textual description" to be as exact as it needs to be so code can be generated from it directly, it will be basically indistinguishable from code.
>> I think we're getting close to strong AI there though, so I don't see it happening any time soon.
It's unclear which clause is meant to be negative or positive here. If we're close to strong AI, systems which ask clarifying questions would be close as well, no?
Sorry. To be clear: I think strong AI is "fifty years away" and has been for seventy five years.
So yeah, an expert system that can read a natural language description of a feature and ask questions until it has enough detail to generate the code is science fiction in my opinion.
Having said that, maybe it could be achieved in a limited domain, SHRDLU-style. But that's just a way to deal with the ambiguity by excising it.
It doesn't have to be exact, any more than your input to DALL-E has to be exact. Give the AI some understanding of the intent of the project and some automatic metrics that check the quality of its work, and let it optimise against them. If the AI doesn't generate what you wanted on the first run, correct it and let it learn. Basically what you would do with a junior developer.
Programming is as much encoding a domain and human communication as it is computation. If you can automate the first two, which is what you're claiming will happen, then all of human knowledge work could be automated. Call me skeptical.
The issue is that natural language is often ambiguous. So what you want is to define a formal language that takes out the ambiguity.
Over the last decades, we have grown from writing assembly language towards ever more generic languages that allow us to express the same idea with less effort.
I see programming advancing in this direction. It will still require training to 'speak' the formal language used to communicate with computers, but it will keep getting easier and easier, leaving computer scientists as a niche occupation that actually builds the layers supporting the higher levels.
Good luck explaining an enterprise-sized system to an AI, validating that all the use cases match the requirements ... and don't forget our best friend: change. This will be a full-time job.
Kinda yes, in a more serious sense, there will be new roles for humans to play with respect to moderating technologies; "Bladerunners" might not be exactly what P. K. Dick imagined, but maybe not so far off.
If we take AI to be the science fiction vision we seem to wish for, then it will require managing, stewarding, planning, opposing, judging, teaching, healing, hunting down and deactivating, sabotaging, negotiating with....
Bricklayers became architects, surveyors and town planners as complexity increased in the construction world. Cities evolved to have traffic wardens and police... but the same has yet to really mature in digital technology. We imagine all these benefits of "smart cities" and "digital working" - many ideologies that have been around since the 1970s. Yet software engineering is still in its infancy with respect to civic function, ethics, rights and responsibilities, remedies and rules.
We have a more or less laissez-faire free-for-all market economy that produces isolated "goods". The failure of this default model can be seen in the tombstones of the Google graveyard. And, while it was a driver of innovation for some time, it isn't really working out in the big "civic" sense, and we certainly cannot rely upon centralised mega-monopoly Big Tech to do the right thing (except in a William Gibson style techno-fascist dystopia).
So I think many timeless jobs will adapt so that technologies can fit into our society as welcome, well-managed friends, rather than allowing them to take over our society. Maintaining that balance will be a new frontier for human intellectual labour in teaching, legal, planning and policing functions.
I've always hoped somebody would make an APL for iOS and Android. The code density and use of symbols seems ideal for small touchscreens.
I hope the years ahead and the investigations that fill them produce a picture of a cartoonish, mustache-twirling villain. It would, as you've pointed out, make it so easy to understand. I expect something more nuanced and layered, filled with self-deception and good intentions, might emerge. The sort of thing which reminds us that risk is not just that of gleefully explored possibility, but also the sort of risk to be managed and mitigated … a dull, retrograde conservatism rendered gauche by decades of SV VC survivorship bias. Time may tell.
My take on this:
the stock market tanked, US Congress and Senate members have already sold all their stocks, and now they've told the FTC it's safe to go after Big Tech.
The version with the tilde character is worse! At least with the bare slash, some versions of rm will simply refuse to run (they require --no-preserve-root). Plus, even if it works, "rm -rf /" is likely to start at something like /bin and spew a ton of errors unless you use sudo, giving you ample time to abort before it starts wiping the stuff you care about in /home, /usr, etc.
On the other hand, with the tilde, it will just start wiping your personal files right away and will probably not run into any permission errors.
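Roughly what that looks like in practice (this assumes a recent GNU coreutils rm; exact messages vary by version and system):

$ rm -rf /
rm: it is dangerous to operate recursively on '/'
rm: use --no-preserve-root to override this failsafe
$ rm -rf ~
# no such safeguard here; it starts deleting your home directory immediately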
$ git clone git@github.com:dotfiles '$HOME'
Cloning into '$HOME'...
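# the single quotes kept $HOME from expanding, so git created a literal directory named $HOME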
*ugh...*
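# the cleanup below uses an unquoted $HOME, which the shell expands to the real home directory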
$ rm -rf $HOME
*wait...*
$ cd $HOME
cd: no such file or directory: /Users/admin
*fuuuuuuuu*
No deep understanding needed; the only thing you need to know is recursion.
Also, SQL is even better than that because every query is its own statement, so the parsing is dead simple. I totally get why he did it this way.