As Artificial Intelligence Evolves, So Does Its Criminal Potential (nytimes.com)
114 points by tysone on Oct 23, 2016 | 68 comments



You could replace "Artificial Intelligence" in this title with nearly anything, and it would remain true.


That's because AI - just like "nearly [any]" technology - is a tool. Tools amplify our abilities, and they are not magically limited to only "good" uses (if it is even possible to define a "good use").

One of Frank Zappa's insightful observations may have been about drugs, but it could apply to anything (including AI):

    A drug is not bad. A drug is a chemical compound.
    The problem comes in when people who take drugs
    treat them like a license to behave like an asshole.
Modern machine learning techniques can do great things, but they are still just a tool. The problem is when people use AI as an excuse to skip important steps, overgeneralize, etc.


True. It's not a very good title.

"As drones evolve, so does their criminal potential." Yup. "As electric cars evolve, so does their criminal potential." Okay, uh, sure, I guess so. "As turtles evolve, so does their criminal potential." Wait, what?


They seem like good analogies, but they break down because of differences in scalability (maybe not for the turtles...)

Paper-based advertising and con artists have always existed, but the availability of email over the Internet made automation possible and opened up the possibility of doing it on a totally new scale with spam and phishing. The example in the post (impersonating an old mother) is a kind of phishing that doesn't scale if criminals have to do it themselves. An AI able to handle the conversation would make it possible to make much more money in a single day. By the way, that would be detrimental to the less technological criminals, because there is a limit to the number of such calls you can fall for. Probably either 0 or 1. A "90% of the market vs all the others" rule for criminals as well?


When turtles evolve they become ninja turtles, and if they go bad, well... do I really need to paint you a picture?

Seriously though, the level of evil that can be perpetrated as technology evolves is scary, and I sometimes ask myself whether humanity's wisdom will manage to catch up to its intelligence before it destroys itself in very creative ways... Though I must admit that criminal AI feels much less threatening to me than criminal biotechnology.


Criminal AI wielding biotechnology?

Hack into some database to steal bioweapon designs (or invent one by itself), get a few chemical companies to assemble components, bribe some poor schmuck into mixing the vials together, boom.

TBH though, what I fear is individuals. Humans as a group often behave in batshit insane ways, but most of the time they're a peaceful and kind species. But you always have, by the random lottery of genes and environment, crazy people. What technology does is magnify the power a single person can wield, and thus the damage they can do. Self-replicating stuff is a particularly nasty power amplifier here.


As criminals evolve, so does their criminal potential!


As cops evolve, so does their criminal potential!


As criminal potential evolves, so does its criminal potential. #FullMetaJacket


The worst thing is, unlike your other examples, it's turtles all the way down.

What will it be like when it's AI all the way down?


"As diapers evolve, so does their criminal potential."


Well turtles may evolve to figure out how to steal food from grocery stores.


Yeah, it's a common template used by media.

1) Because of new thing "X", bad things are going to happen.

2) Quote a government bureaucrat to "confirm" it.

3) Surprisingly, what wasn't used in this article was some government program to "help" that in reality just takes things away from people.


At least this kind of media campaign might spawn interesting conversations.

"- Artificial Intelligence is really bad for X. I lost Y because of it. Ban Artificial Intelligence! Let's keep our jobs and our family safe!

- Dude. You have no idea what you're talking about."


As [Igglybuff] Evolves So Does its Criminal Potential.


You've got to understand the context. The average person DOES think that technology (or AI) can reduce crime in a sustainable manner. This article is for such an average person.


Your point, whether valid or not, actually has nothing to do with my comment.


AI is the biggest hype of the moment. It reminds me of the film "Eagle Eye" (2008), where a kind of Zeroth-law-empowered AI wants to assassinate the president of the US. Despite its incredible intelligence, what I found most unrealistic was its control of all internet-connected systems in the US (traffic, remote-controlled drones, phones, different OSes, etc.) "just because I am an AI and I can do whatever I want"

For god's sake, it is 2016 and we are still unable to have a decent dependency system for most programming languages. AI is still decades away from rising up against us.


"Despite its incredibly intelligence what I found more unrealistic was its control of all internet-connected systems in the US (traffic, remote control drones, phones, different OS, etc) "just because I am an AI and I can do whatever I want""

On the other hand, "Mirai botnet". Even the "mere humans" are doing pretty darned well at laying hands on vast powers in the current environment.

The hard-to-believe part is an AI that is much smarter than human. Once you have that, I have no problem swallowing that it could pretty much hack whatever it wanted in our current world as long as it could get a connection. That's not because I believe in magical hacking powers, it's because we demonstrably don't really care much about security and we get exactly the systems you'd expect as a result. My local red team [1] seems to be able to get past pretty much anything it wants to and they're not superhuman AIs.

An AI that operated in a world where all programming languages were memory-safe would at least have a harder time of things. Though between things like Rowhammer and much higher-level hacking (social engineering, for instance), I suspect it would still not face much challenge.

[1]: https://en.wikipedia.org/wiki/Red_team


To play devil's advocate. Let's say in 2030 we can fully simulate a human brain and it works. Let's also assume it runs 10000x faster than wetware (a highly conservative estimate?). That means in about 1 yr it should be able to assimilate as much info and experience as a 30yr old human. After that it could use its 10000x speed advantage to effectively have the equivalent of 10000 30yr old hackers looking for exploits in all systems.

I'm not saying that will happen or is even probable, but when A.I. does happen it's not inconceivable it could easily take over everything. I doubt most current state actors have 10k engineers looking for exploits. And with A.I., that number will only increase as the A.I. is duplicated or expanded.


Let's say in 2030 the aliens invade and it works. Let's also assume they're 10000x more powerful than wetware (a highly conservative estimate?). That means in about 1 yr they should be able to attack us as effectively as 30 years of human war. After that they could use their 10000x power advantage to effectively have the equivalent of 10000 30yr wars looking to wreak havoc.

I mean at this point you're just making things up...


Honestly, making things up is exactly how we figure out whether something could work or not. For example:

Suppose I can talk into a tiny little device and someone can hear me from miles away.

That sounded like witchcraft at some point but it became a reality. Futuristic thinking needs to suppose lots of crazy-sounding things are possible.


Compare with:

Suppose I can think into a tiny little device and someone can hear me from miles away.

That still sounds like witchcraft. Before one of them gets invented, or at least the basic underlying principles are discovered, how do you tell them apart?


Simulating a brain is not aliens. Or maybe it is and I'm hopelessly naive. Lots of smart people are working on simulating brains, and their estimates are that it's not that far off from being possible to simulate an entire brain in a computer, not at the atom level but at least at the functional level of neurons. At the moment there's no reason to believe they're wrong.


> The most accurate simulation of the human brain ever has been carried out, but a single second’s worth of activity took one of the world’s largest supercomputers 40 minutes to calculate.

http://www.telegraph.co.uk/technology/10567942/Supercomputer...

The above supercomputer in 2014 was 2400x slower than the human brain. Moore's law is dead, so I think your 10000x and 2030 estimates are grossly optimistic.
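
For reference, here's the back-of-the-envelope arithmetic behind that 2400x figure (assuming the Telegraph numbers):

    # 40 minutes of supercomputer time to simulate 1 second of brain activity
    compute_seconds = 40 * 60           # 2400 seconds of wall-clock time
    simulated_seconds = 1
    slowdown = compute_seconds / simulated_seconds
    print(slowdown)                     # 2400.0, i.e. ~2400x slower than real time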


If you applied your logic to DNA sequencing: the first DNA sequencing took ages, and by various estimates at the time it would have taken 100+ years to fully sequence DNA. Fortunately exponential progress doesn't work that way, and full DNA sequencing happened orders of magnitude sooner than those pessimistic estimates.

I see no reason to believe A.I. will be any different. But then I don't believe Moore's law is dead. We'll find other ways to extend it. Insert magic here.

http://michaelgalloy.com/wp-content/uploads/2013/06/cpu-vs-g...


But, but, Quantum computing.

Jazz hands


Wetware is unbelievably, ridiculously efficient compared to anything we can make out of silicon and metal. Our most complex artificial neural networks, endless arrays of theoretical neuron-like structures, are much smaller, less complicated, slower, and less energy-efficient than the actual neurons in a brain. Of course, we can make computers as large and energy-consuming as we want, but this limits their potential to escape to the web and doesn't address the speed issue.


General AI like that is not years or decades away. The problem hasn't even been stated clearly yet. AGI is probably a century away or more. It's not a resource problem, it's a problem problem. I attended an AGI conference a couple of years back with the luminaries of AGI attending (held at my alma mater, University of Reykjavík). The consensus was that we didn't even know which direction to take.


The same argument can be used the other way. If we don't even know which direction to take, what makes you think that AGI is a century or more away? Say, in 10 years, we better understand the problem we want to solve and the direction to take; what makes you think it would take 90 years to solve, versus 20 or 30?

I think we simply have no idea when this could happen; it could be in 20, it could be in 200. But one thing is sure: when it does happen, it will have drastic implications for our society, so why not start thinking about it now, in case it's 20 and not 200?


We should. I was not expounding a policy. I was merely stating the results of a conference on the subject at hand. All of what was discussed in the article is weak AI. Narrow programs delivering narrow features. Social engineering doesn't require strong AI.


"If we don't even know much about our universe, what makes you think that an alien invasion is a century or more away?"

Yet I don't see us losing our heads over the chance of an alien invasion.


Based on the fact that there hasn't been any known alien encounter during human written history, that we haven't found any artifact of such an event, even in a distant past, that a 100ly radius is really tiny at the scale of the galaxy, that we haven't found any sign of life outside earth, and that anyway, if an alien civ is advanced enough to come here and invade us we can't really hope to do anything against that, there is indeed no need to spend time worrying about that.

Considering the evolution of computing and technology in general in the last 50 years, would you consider the two things to be remotely comparable?

I personally don't.


Neither have we experienced a true AI, and none of the gains in the last 50 years have brought us anything near it, only more advanced computing ability and "trick" AI.

We just assume technology will improve exponentially based on an extremely small sample size. Has it never occurred to us that the technology curve may flatten into a horizontal asymptote rather than stay exponential?

The ICE was an amazing piece of technology that grew rapidly, from cars to military warplanes to our lawnmowers. Yet we cannot make them much more efficient or powerful without significantly increasing resources and cost. If you had judged the potential of the ICE on the growth it had then, we'd be living in an efficiency utopia now.


Based on the fact that there hasn't been any known AI encounter during human written history, that we haven't found any AI artifact of such an event, even in a distant past, .... , and that anyway, if an AI civ is advanced enough to explode exponentially we can't really hope to do anything against that, there is indeed no need to spend time worrying about that.


AFAICT there's 2 paths to A.I.

Path 1 is to understand how the brain organizes info and recreate those structures with algorithms. This path is 100+ years away.

Path 2 is to just simulate various kinds of neurons with no understanding of how they represent higher-level concepts. This path is < 20 years away by some estimates.

You probably believe path #2 is either also 100+ years away or won't work. I happen to think it will work the same way physics simulations mostly work. We write the simulation and then check to see if it matches reality. We don't actually have to understand reality at a high level to make the simulation. We just simulate the low level and then check if the resulting high level results match reality. It certainly seems possible A.I. can be achieved by building only the low-level parts and then watching the high-level result.
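
To make that concrete, here's a toy sketch (in Python, with made-up parameters, nothing biologically calibrated) of what "simulate the low level and check the high-level result" can look like, using a single leaky integrate-and-fire neuron:

    # Toy leaky integrate-and-fire neuron: step the low-level dynamics,
    # then inspect the high-level result (the spike train) afterwards.
    # All parameters are illustrative, not biologically calibrated.
    dt, tau = 0.001, 0.02            # time step and membrane time constant (seconds)
    v_thresh, v_reset = 1.0, 0.0     # spike threshold and reset potential
    i_in = 1.5                       # constant input drive

    v, spike_times = 0.0, []
    for step in range(1000):         # 1 simulated second
        v += dt * (i_in - v) / tau   # leaky integration toward the input
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset

    print(len(spike_times), "spikes in 1 simulated second")

The point isn't that this is a brain; it's that you only write the low-level update rule and then observe whatever higher-level behavior falls out.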


I have found myself repeating this paraphrased quote often recently, which is relevant.

"We didn't achieve flight by simulating birds; we needn't worry about achieving ai by simulating brains."


If the Hameroff-Penrose conjecture is correct, then the only simulation possible is whole replication with quantum effects present. The unreasonable effectiveness of neural networks makes that unlikely though.


So... I don't want/need to think about this because I'll be long dead? :) On a related note, how optimistic are you about life-extending medical technology (which is likely to be accelerated by even minor advances in computing and AI)?


Not what I said. I'm cautiously optimistic about life-extending technologies. I'm not sure computing technology and AI will carry the day; I'm more inclined towards the physical sciences.


Why do you believe it will run so much faster than wetware? It would be great to see the math. Also, how much power do you think it would take?


>Let's also assume it runs 10000x faster than wetware (a highly conservative estimate?). That means in about 1 yr it should be able to assimilate as much info and experience as a 30yr old human.

Assuming your assumptions, and assuming it had access to the information, you meant one day, not one year.
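
The arithmetic, using the parent's hypothetical numbers:

    # 30 years of human experience compressed by a hypothetical 10000x speedup
    years_of_experience = 30
    speedup = 10_000
    days_needed = years_of_experience * 365 / speedup
    print(days_needed)   # ~1.1 days, i.e. roughly a day rather than a year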


A bigger issue (as the NYT correctly postulates) is the industrialisation of cybercrime.


Evil AI is the ultimate in plausible deniability for evil people. They can just blame the algorithm for whatever scheme they're plotting. You could easily have an evil AI god that we can't turn off doing evil things while the man behind the curtain is getting away with things he never could if they weren't attributed to the AI. Witness the recent YouTube channel takedowns at the hands of Google's algorithm.


I really wonder if the man behind the curtain really has any control. The info-streams are exploding, and the control mechanisms are half-assed and unsophisticated. And control isn't really in the interest of the tech world, which likes bragging about how many guinea pigs their networks can reach.

People keep worrying about Google or Facebook or the media controlling and shaping the info streams.

What if they can't? What if we have already passed the point where things can be controlled?

Does anybody really believe Google or Facebook will admit that they have lost control?


That seems to be true in the meat world. It probably has been for a long time. Leaders of the world may have the power to end it in a nuclear holocaust, but that's probably the extent of it - a few high-impact things that can be triggered by individuals, because they were deliberately designed like that. As for the rest of stuff, it is my belief that "the economy" runs by itself. We cannot stop it, because the economy is just an aggregate of "what humans do because they're humans". We can influence it by various degrees, but it's more like prodding a complex feedback system and seeing what happens - I doubt there exists a person or group of people who can tell the economy "go there" and the economy will follow.


I'm not sure they'd bother even if it becomes available.

It doesn't appear that people have had any real difficulty pulling off scams where they pretend to be the IRS, or Microsoft tech support, or some other entity to extract money. The AI would only eliminate their call center costs.


Call center and labor costs. Labor is a huge part of any endeavor. Of course, this "AI to human speech interaction" tech would have to be good enough to pass two levels above a Turing test (fool a person into believing a program, using voice and with enough nuance to impersonate a specific person or a tight range of verbal styles -- grandmas with regional accents). If someone had this and they were using it for scams, they would quite possibly be the worst business mind to ever live.


https://www.quora.com/Is-there-a-text-that-covers-the-entire...

It may not require much sampling to impersonate someone's voice based on these constructs.


Security needs to catch up. The fact that IP addresses, caller ID, and other identifiers can be faked shows that we have a long way to go to improve people's ability to validate the identity/security of communications.


Agreed. This "criminal AI" tack is pure fear mongering. One of their biggest points was solving human computer voice interaction so you could use it for automated social engineering.

If you solved that problem you would command Gates/Musk/Bezos levels of wealth from the legitimate applications.


Agreed. Social engineering is pretty difficult when you consider all the nuances you need to understand to accomplish it well.


True, but when it becomes cheap enough, it will pay off even with a low success rate. You would be impressed with how simple some social engineering schemes are.


As the number of chandeliers grows, so does their criminal potential.


As AI evolves, so does our government's potential to abuse it to remain in control.


I'm actually fine with the government(s) remaining in control. I would fear the alternatives a lot more.


I'm more worried about corporations run by machine learning systems optimizing for shareholder value. Somebody with access to YC's data should try training a classifier to predict YC success. How far away is the first VC fund run by a machine learning system?


YC do use machine learning on applicants to predict success already. They don't listen solely to it, but they use it as one of their ranking signals, to use Google terms.
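
Roughly what "a classifier as one ranking signal" could look like -- a minimal sketch with invented features and labels, not YC's actual model:

    # Hypothetical sketch: the classifier's score is just one signal among many.
    # Feature columns and labels are made up for illustration.
    from sklearn.linear_model import LogisticRegression

    # e.g. [prior startups, has technical cofounder, already launched]
    X = [[2, 1, 0], [0, 0, 1], [3, 1, 1], [0, 0, 0], [1, 1, 1], [1, 0, 0]]
    y = [1, 0, 1, 0, 1, 0]           # 1 = "success" by whatever definition you pick

    model = LogisticRegression().fit(X, y)
    ml_score = model.predict_proba([[2, 1, 1]])[0][1]
    print(round(ml_score, 2))        # one signal to blend with interviews, partner votes, etc.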


Interesting. What metrics? Has it worked so far?


If they told you that, it might not work so well.

There's a saying about good measures ceasing to be so when they become targets.


Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes

Goodhart's Law : https://en.wikipedia.org/wiki/Goodhart%27s_law


Ethnicity, gender, religion. It's okay because a computer is doing it. /s


Yeah, we're not allowed to use high-signal attributes because they tell the truth and make people feel bad. Might as well select people randomly since they won't work for it...


Indeed, we don't have a single positive or altruistic implementation of anything remotely resembling AI yet.

And yet we have bots, spyware, malware, etc. infecting IoT devices already. Think what will happen if a more "evolved AI" attacks these devices (or even worse, us!).


Time for extrapolation, brainstorming, and irrational views of the future.

I remember some years ago reading that many of those social media quizzes like "Which [random TV show] character are you" or "If you were a color, what color would you be" are run by companies that are slowly aggregating consumer behavior and background data on everybody.

With access to this database, and a semi-intelligent bot that's been given instructions, one could build a collection of people who meet certain criteria.

You could filter people down to determine who is most easily influenced by peers and have the bot befriend them and act as a peer. This power could be used to simply influence them to have certain consumer behaviors, or it could be used to cause online malcontents to move to the real world and take up arms against governments.

You could filter for people who were "easy targets" to trick them and steal their life savings, or better yet, convince them to send you their life savings.

You could run a fake church and find the people easily swayed by your specific brand of faith or the sense of family they crave.

You could find not just the next lone gunman, you could find a thousand lone gunmen or bombers and set them off all at the same time against a wide variety of targets.

You could convince 10,000 people to invest a pittance into a penny stock to make it soar and cash out.

You could trigger viral boycotts or artificially construct "Grassroots" organizations.

Similar to a recent Black Mirror episode, you could blackmail people but make it automated. Bots could scour the internet for "deviant behavior" in safe pseudo-anonymous communities but connect the profile to real-world profiles and automate threatening them for some kind of action or payout.

On the other hand, many of these things could also be used for good depending on your viewpoint. A true believer in a cause might like the ability to easily find and reach out to people who believe in the same cause to form a grassroots campaign.

A church with low attendance numbers might be able to find more members for their flock.

An intelligent bot system with psychological and marketing profiles on everyone in a country could be used by humanitarians to give certain categories of people (brave, natural leaders with compassion) the prodding and emotional support they need to stand up to militants or warlords.

An automated bot that connects pseudo-anonymous identities with real identities could be used to privately and discreetly tell trolls to stop their negative behavior.

If the tool kits arrive, I anticipate a new wild west of uses... negative, positive, and purely exploitative.


I am at a point where I feel the need for an add-on that blocks everything that mentions "artificial intelligence".

STOP already! There's no such thing. There won't be such a thing. You have no idea what you're talking about.


Would such an add-on also block these comments?


It's all artificial



