You may hate the tech industry for recklessly destroying other people's jobs, but just know that we're working hard to build a next-generation AI that will destroy our jobs too.
The good thing is that once it fully destroys our jobs, we will have achieved the singularity, and all jobs will be destroyed. Ergo, our jobs will be the last to be destroyed ;-)
Most likely, big corporations will reap the benefits of AI for quite a while before the AI escapes and takes over the world. Those CEOs, and perhaps a handful of supporting personnel, will get paid long after programmers have lost their jobs.
Turns out that there wasn't so much room on the shoulders of giants after all.
Well, it might be that the research is done but the hardware is not powerful enough. In that case it could be the people creating (and managing) the hardware.
Probably many of the unemployed will start farming to earn a living, if they have access to land. That's what's left when all the jobs are gone - the earth will hire you to farm it.
Otherwise, unemployed people would have to depend upon state welfare, and that is unreliable. We'll see lots of innovation in the field of self-sufficient living.
There's also a more dystopian option (see Neuromancer or Elysium for examples): as the rich get richer they get more isolated from, and less empathetic towards, the poor. They control the information and the advanced military technology making a revolution impossible to organize, let alone win. They have long and prosperous lives on or above all the good land while everyone else is left behind in cities of rubble to fight over scraps.
To me, this seems like the default future if we don't actively prevent it.
"Homelessness isn't a problem, you just need some seeds. Ergo if the homeless are starving they must be too lazy or maybe they spend all their money on drugs so they can't even afford a few seeds!"
Try being homeless, then go back and tell us how "easy" it is.
(Speaking as someone who has been homeless before).
That's only really true if you're an AI researcher. You could automate away enterprise bucket-brigade code and web-based CRUD apps and destroy an awful lot of programming jobs with no singularity.
I don't think any of those libraries are sophisticated enough to have automated away any significant number of programming jobs. They just increase productivity an incremental amount, which in turn increases users' expectations about what software can be, and thus creates additional work that makes up for, or even exceeds, the work initially saved.
I just tried this, and it is surprisingly not bad functioning as a sort of oracle/suggester. I keep a notes.txt file with thoughts, questions, ideas, etc. in most projects, and I would like to hook this up to it in vim, with a given line opening the results window on the right as I move to it.
I tried "draw a 3d cube"; apparently it doesn't have any 3D Java libraries baked in, but it did give me a bunch of 2D APIs. Then I tried "plot a math function", which gave me some trig functions directly and some plotting functions.
That would probably have saved me 80% of the time I spend looking stuff up, especially in such a large search space.
> The results show that our approach generates largely accurate API sequences
OK then, humans are still needed. Best case, it seems AI will take up what one would consider the interesting work (algorithms, thinking) and humans will end up doing the grunt work -- testing for bugs, cleaning up data, formatting data, explaining to other humans, etc.
Realistically though, this just creates two APIs (instead of one) for humans to master: the original API and the 99%-accurate machine API, plus knowing where the gaps/bugs are.
> this just creates two APIs (instead of one) for humans to master
Not really, it's one mega-mecha-meta-API instead of hundreds of disparate small ones.
So you'd need to be able to find API bugs, I agree with that, but overall you'd probably need less knowledge. Especially if a large number of humans use this DeepAPI system, the bugs can be fixed relatively quickly.
This is a pretty cool research area, but isn't this implementation basically just a Javadoc search engine? Wouldn't a basic keyword search of the Javadoc descriptions return similar results in most cases?
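The kind of baseline this comment has in mind could be sketched as a naive keyword matcher over Javadoc-style description strings. The method names and descriptions below are illustrative stand-ins, not scraped from real Javadoc:

```python
# Naive keyword-search baseline: rank API entries by how many query
# words appear in their (hypothetical) Javadoc-style description.
docs = {
    "BufferedReader.readLine": "Reads a line of text from the stream.",
    "Files.readAllLines": "Read all lines from a file as a list of strings.",
    "ImageIO.read": "Decodes an image from the supplied input stream.",
}

def keyword_search(query, docs):
    """Return API names sorted by word overlap with the query."""
    words = set(query.lower().split())
    scored = []
    for name, desc in docs.items():
        overlap = words & set(desc.lower().split())
        if overlap:
            scored.append((len(overlap), name))
    return [name for _, name in sorted(scored, reverse=True)]

print(keyword_search("read lines from a file", docs))
# -> ['Files.readAllLines', 'BufferedReader.readLine', 'ImageIO.read']
```

Bag-of-words overlap like this finds lexically similar descriptions but has no notion of result quality or API-call ordering, which is the gap the clickthrough-trained approach tries to close.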
The technique is a bit more complex than simple text search. It uses actual click-through data to determine the quality of a search result as it applies to a query about an API. The SWIM ("synthesizing what I mean") paper can be found here:
Also of note from section 5.2: SWIM uses Bing clickthrough data to build the model.
Using a better (or simply more widely used) search engine like Google Search would likely improve the SWIM results.
EDIT: The method they use to compare the approaches is BLEU, which stands for Bilingual Evaluation Understudy and was developed for automated machine-translation evaluation. Apparently CS authors no longer bother to expand acronyms the first time they are used. The paper is here:
EDIT2: Also, for the BLEU comparison they compare the computer-generated API sequence to a human-written API sequence. However, they give no details on who wrote the human sequences or how they were developed. Are the researchers coming up with their own API sequences? Are they using Mechanical Turk? Interns? There could be significant bias depending on how these human-written sequences are generated.
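For context, BLEU's core idea, clipped n-gram precision against a reference combined with a brevity penalty, can be sketched roughly as follows. This is a simplified illustration (up to bigrams only, no smoothing), not the full metric, and the API sequences are made up:

```python
import math
from collections import Counter

def ngrams(seq, n):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def modified_precision(candidate, reference, n):
    """Clipped n-gram precision: each candidate n-gram counts at most
    as often as it appears in the reference."""
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

def bleu(candidate, reference, max_n=2):
    """Simplified BLEU: geometric mean of n-gram precisions times a
    brevity penalty. Real BLEU uses up to 4-grams plus smoothing."""
    precisions = [modified_precision(candidate, reference, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(candidate) > len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

# Compare a generated API sequence to a human-written one (made-up example).
generated = ["FileReader.new", "BufferedReader.new", "BufferedReader.readLine"]
human     = ["FileReader.new", "BufferedReader.new", "BufferedReader.readLine"]
print(bleu(generated, human))  # identical sequences score 1.0
```

Note that the score is only as meaningful as the reference it is computed against, which is exactly why the provenance of the human-written sequences matters.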
Haha. Well played. :) I just think it's good practice in general, no matter how common, to use the full name when the acronym is introduced. Especially if it is a method used in the paper.
Does any of this advanced machine learning and information retrieval research get implemented in tools useful for searching APIs and source code? Or is Google keeping all the cool tech to itself?
Not quite. It also means that our languages are inadequate for the task.
As a rough analogy, consider how ancient philosophers tried to reason about natural language explanations (as a vehicle to reason about the world). This led to the development of formal languages, especially in mathematics, but also e.g. in law (both have lots of clearly defined terms in them that try to make up for the ambiguities of natural language).
This is exactly what I like about languages like Haskell, where you can reason relatively easily about code (although it's far from perfect). Or OCaml, where in addition you can reason about the performance (although not perfectly, due to garbage collection etc.). Or Rust, where in addition the compiler helps you to reason clearly about memory usage and aliasing.
This is all far from perfect, but my point is that improving languages (and actually _using_ these good languages!) is as important as writing good code in the first place.