FTC proposes new protections to combat AI impersonation of individuals (ftc.gov)
152 points by oblib 7 months ago | 98 comments



Seems like this is broader than just AI impersonation. It also would include a person claiming to be from the government when they’re not.

> Falsely imply government or business affiliation by using terms that are known to be affiliated with a government agency or business (e.g., stating “I’m calling from the Clerk’s Office” to falsely imply affiliation with a court of law).

My first thought on this was: great, I’m glad the FTC is doing something about this, and I’m surprised it wasn’t already regulated by the FTC.

My second thought was that the majority of this type of fraud probably comes from foreign impersonators, not Americans. And it’s not like they’d be sending in Predator drones for surgical strikes against scam call centers, as satisfying as that might be.

My third thought was that having this on the books and keeping a record of these violations will give the FTC leverage to crack down on telecoms that don’t do anything about it.


> My second thought was that the majority of this type of fraud probably comes from foreign impersonators, not Americans.

There's an industry for it in India, because they can source English-speaking call center workers there fairly easily.

The YouTuber Mark Rober ran into a couple of US mules for these sorts of scams while working on one of his prank videos, and decided to team up with some other YouTubers to troll some of these companies in India. The video on it is worth watching: https://www.youtube.com/watch?v=xsLJZyih3Ac


>My second thought was that the majority of this type of fraud probably comes from foreign impersonators, not Americans. And it’s not like they’d be sending in Predator drones for surgical strikes against scam call centers, as satisfying as that might be.

I think it means that American-owned social media platforms would be required to comply with these rules though, right? That seems like a fairly big deal.


indeed, almost no novel criminality or fraud is occurring.

robocalls should just be illegal. unidentified propaganda should be banned. ads with fraudulent claims should be prosecuted.

you can argue about "to what degree" but AI isn't doing anything but exposing the true lack of enforcement of existing laws because of capitalism grease.


I don't think we need to ban robocalls. People do want to know when their school closes, when their prescription is ready to pick up, when their utilities will be undergoing maintenance, etc. The ban should be on unsolicited calls, which sort of does already exist, but has two major problems: too many exceptions, and calls from outside of US jurisdiction.


It should be possible to hit a button and receive $1 for every unwanted phone call. If the carrier cannot come up with a liable party, they are responsible for the $1.

Ban robocalls, though, and just pay real humans to make those calls instead of overpaying superintendents.


I really like this idea because it has a sassy, unusual, creative and useful logical structure.

As we know, a weakness in all free-entry systems (like email) is freeloading. The somewhat broken "solution" is charging a small fixed fee to the initiator. But this doesn't work in systems we would like to keep at zero cost, because it's a blanket response to an acute problem - most comms are welcome and legitimate.

Giving the callee the power to charge the caller a small "handling charge" if they don't like being bothered makes a lot of sense.

Of course it has its own modes of abuse, but the best way to avoid that is not to send to people you don't know.


Like the worst case scenario for individuals is their friends do it to them as a joke, or they misdial and lose $1. I doubt most people would exercise the option for a wrong number, though. I need to iron out some edge cases, like it must be ineligible for 800 and 900 numbers. But then I am pretty certain all carriers will be able to miraculously trace and block unwanted calls at midnight of the day it takes effect.


Maybe apply it only to call centers?


I've always wondered why Google and Apple don't build a report button into their dialer that allows you to generate an FCC complaint. This seems like a pretty easy task, but doesn't seem useful from a third-party app. It would auto-fill a fair amount of information for you. I'm sure this would burden the FCC's systems, but I'm not sure that's a big concern; rather, it would just force the FCC to recognize the scale of the problem. Hard for politicians to argue against, too: "We need to defund the FCC because companies like Google and Apple are overburdening the systems with reports of spam callers!"

It would also be nice if they streamlined the ability to add yourself to the Do Not Call list.

Google engineers that are here, is there a reason you don't do this? If not, can you do it in your 20% time? 350 million of us will thank you.


20% time died a decade ago


Ahh classic Google, killing everything that made it the thing it is today (or maybe better put, yesterday)


It's hard to collect, but you're actually entitled to something like $500 per violation of the do not call list. I do wish there were a button on my phone to collect all that money.


That would be rad. The problem is all the guys trying to contact you about your car's warranty are outside US jurisdiction or hidden. If the phone companies were liable at all that would end immediately. I'm fine if it's $500 instead of $1, but it has to be easy to collect.

You used to be able to collect directly from spammers if you could hunt them down, too. I think CAN-SPAM ended that, which is a shame.


Yeah "hard to collect" is something of an understatement.


all of those things can be communicated just as easily over text message. A solution could be opt-in on the customer side, and the business could need a license from the telecom to robocall. Otherwise the FTC could fine the telecom and/or the business for unsolicited robocalls.

I would go as far as to say unsolicited business calls, automated or otherwise, should be banned.


Not everyone can receive or read a text message.


Not everyone can hear a voice-mail. My partner is deaf, and if Google Voice can't transcribe a voice-mail, they don't know what it says until I listen to it.


Many automated messages that I get come via both text and voice, I presume, for this reason.


whatever happened to SHAKEN/STIR and why didn't that fix everything like people said it would?


IIRC it's still being slowly implemented - as slowly as the mandates allow - because any telecoms that can make money by forwarding fraudster calls to other telecoms don't want to support it.


I believe SHAKEN/STIR was mandated only for calls originating from within the US; however, most of these fraud calls originate over VoIP (SIP) from outside the US.
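
For the curious, the mechanism works roughly like this: the originating carrier signs a small JSON token (a "PASSporT") attesting how strongly it vouches for the caller ID ("A", "B", or "C" attestation), and the terminating carrier verifies the signature before trusting the displayed number. A toy Python sketch of the idea follows; real deployments sign with ES256 against carrier certificates chained to an approved CA, not the stdlib HMAC stand-in used here, and the phone numbers are made up:

    # Toy PASSporT-style token, to illustrate the SHAKEN/STIR idea only.
    # Real implementations sign with ES256 against carrier certificates;
    # the HMAC key here is a stand-in so the sketch runs on the stdlib.
    import base64, hashlib, hmac, json, time

    CARRIER_KEY = b"not-a-real-signing-key"

    def b64url(data: bytes) -> str:
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    def sign_call(orig_tn: str, dest_tn: str, attestation: str) -> str:
        # "A" = carrier fully vouches for the caller ID, "C" = barely at all
        header = {"alg": "HS256", "typ": "passport", "ppt": "shaken"}
        payload = {"attest": attestation, "orig": {"tn": orig_tn},
                   "dest": {"tn": [dest_tn]}, "iat": int(time.time())}
        signing_input = (b64url(json.dumps(header).encode()) + "." +
                         b64url(json.dumps(payload).encode()))
        sig = hmac.new(CARRIER_KEY, signing_input.encode(), hashlib.sha256).digest()
        return signing_input + "." + b64url(sig)

    def verify_call(token: str) -> dict:
        # Terminating side: reject the call if the signature doesn't check out.
        signing_input, _, sig = token.rpartition(".")
        expected = hmac.new(CARRIER_KEY, signing_input.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(b64url(expected), sig):
            raise ValueError("spoofed or tampered caller ID")
        payload_b64 = signing_input.split(".")[1]
        payload_b64 += "=" * (-len(payload_b64) % 4)
        return json.loads(base64.urlsafe_b64decode(payload_b64))

    token = sign_call("+12025550123", "+13105550199", attestation="A")
    print(verify_call(token)["attest"])  # -> A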


I contribute my grain of sand by reporting every single spam SMS that I get. I know those are much harder to spoof, so it's valuable when they get reported.

This is a nice little resource for quick reporting that an HNer put together recently. I have the abuse report email templated now, so all it takes is 2 minutes to report every new one.

https://reportphonespam.org/#Reporting-abuse-to-your-wireles...

Shout out to Telnyx staff for actually processing my requests with a reasonable SLA. Your CS team should be the industry norm.


I used to report spam SMS, then my next phone didn't have a report-spam-SMS option, and I've not seen the option on any phone since.


the link makes it somewhat easy to report. Try it out!


No sane person is going to intentionally touch a potentially malicious link on their phone.


> robocalls should just be illegal

I would pay extra to be part of a phone network that guaranteed humans only and displayed First + Last Name & Organization (if any) of the caller.


This is a legal clarification. Impersonation has always been illegal, but the rules didn't specify AI because it didn't exist back then, so now they do.


> unidentified propaganda should be banned

Oh man, this sounds bad. Who gets to decide that something is the real, real truth and not propaganda? The US? China? Germany? A PAC? A company? Your neighbor?


I think the easily provable part of this is “unidentified”.

Unsolicited and unidentified should be banned, with users being able to opt-in to any lowered bar (which would count as solicited) if they like.

If businesses or political actors started sending robots to shout messages into your home from the street without some opt-in, that would be banned.

I would be happy for all unsolicited messages (mail, email, text, voice) without a specific purpose related to a relationship, or related to unavoidable impact on the receiver, to be outlawed.

Citizens that were ok with unsolicited or unidentified advertising or political messages could opt into the mediums they were ok receiving such messages on.


_unidentified_


> but AI isn't doing anything but exposing the true lack of enforcement of existing laws because of capitalism grease.

This is just a dismissive way of saying “AI isn’t doing anything but making the problem worse.”


right.

I guess my opinion is "they already weren't tracking the type of fraud AI is capable of; they weren't enforcing against fraudulent activity..."

AI just makes it more numerous. a non-story.

"sorry guys, we're still not regulating white color fraud"


> white color fraud

white-collar fraud


i prefer mine as more apropos.


I don't get it, sorry.


When are we sending DEVGRU to raid some call centers?


“Hello sir I am calling from Windows support.”


This is also being quickly rolled out because of the recent incident where someone robocalled voters with a recorded voice that sounded like President Biden, urging voters to stay home instead of voting. I don’t know that they actually used AI for that, but I recall it being suggested that it might have been used, vs. a voice actor.


This is the controversial provision, the "means and instrumentalities" clause. Existing law covers people running impersonation scams. The big question is, what responsibilities, if any, do sellers of tools have? The draft language:

§ 461.5 Means and Instrumentalities: Provision of Goods or Services for Unlawful Impersonation Prohibited.

It is a violation of this part, and an unfair or deceptive act or practice to provide goods or services with knowledge or reason to know that those goods or services will be used to:

(a) materially and falsely pose as, directly or by implication, a government entity or officer thereof, a business or officer thereof, or an individual, in or affecting commerce as commerce is defined in the Federal Trade Commission Act (15 U.S.C. 44); or

(b) materially misrepresent, directly or by implication, affiliation with, including endorsement or sponsorship by, a government entity or officer thereof, a business or officer thereof, or an individual, in or affecting commerce as commerce is defined in the Federal Trade Commission Act (15 U.S.C. 44).

It's the "with knowledge or reason to know" clause that's key here. Various industry parties have already commented on this, some wanting stronger language there to protect sellers of general purpose tools for creating content.

Sellers of automated outbound marketing tools which can be used to deliver impersonation scams might be caught up by this.


There's a conflict of interest at play. The law, as you point out, is rarely enforced, so tool providers are willing to look the other way and accrue revenue from the spammers.

What needs to happen is that the liability should be a clear and present danger: when a reasonable tool provider knows X is a spammer and doesn't investigate (or take action), the cost of maintaining that customer should be many times greater than the revenue from it. This should be particularly true if a spammer is reported by a consumer (which is a strong signal).

Right now, the situation is reversed.


Interesting. I suspect all the mobile providers in the US are sitting on a pretty big pile of evidence from customer reports about which upstream VoIP providers are willing to host scammers.


I suspect in the near future there will be a number of cases where individuals will intentionally release a bunch of AI "chaff", in the sense that having a very large number of bad videos/texts about them, of which many are clearly false, will disguise the actual bad behavior. I'm not sure what term/phrase will be used for this particular tactic, but I am absolutely certain one will arise.


This almost certainly already happens, just without AI. For one example, look at the surfeit of UFO stories, many of which can be plausibly attributed to state efforts to cloud intelligence about actual classified air and space technology.


This was a plot point in the 2019 Neal Stephenson novel Fall; or, Dodge in Hell. In an effort to prove information on the internet was unreliable, a massive spam/defamation campaign was launched against a person -- to the point that the information was unbelievable and obviously fake.

In the novel, despite several attempts from different parties to encourage people to think more critically about what they read on the internet, highly sensational AI generated content radicalizes and stupefies the population.


I think you'll also see this as a defensive measure from some forward-leaning targets of deepfakes.

First, it's awful that this could even be considered as needed in the future, and developers behind the open source projects should consider the future they are enabling. It's not all just harmless tech.

So with that in mind, I've spoken to people who have theorized about releasing preemptive deepfake porn of themselves. This is due to the currently awful but very much existing trend of revenge porn, and the possible expansion of that theme with deepfakes. Can't blackmail someone via embarrassment if it is all plausibly deniable.

For example, I was at a con with a presenter who was a leading advocate against revenge porn, and the call got Zoom-bombed. Awful stuff. I could see someone on the receiving end of that treatment going full nuclear in the manner I described, on the off chance those sorts of measures finally ended the threat.


This (minus the AI part until now) is pretty much the strategy of political operator Steve Bannon, who pithily summarized it as 'flood the zone with shit'. Think of all those junk 'news' sites that are just barely curated content farms using automated 'spinners' to pump out content, rage farming pundits etc.


This kind of happened in The Office, where Michael Scott spread a rumor about everyone to cover up the fact that he was about to get caught gossiping inappropriately about a co-worker.


This is already a thing but it's more for people with money right now.

There are a lot of 'reputation management services' whose job is to flood out bad press and replace it with anything else.


Too bad scammers couldn’t care less what the FTC decides, judging by the crap coming into my physical mailbox, email inbox, and calls and texts.


Nor the FCC. I just got a call from a contractor-recruiter call center. (About half of them are scams, but you can't tell, because they all sound like the same foreign call center.) The caller ID said "Public Service". That has got to be illegal?


People don’t want to be impersonated by deepfake.

Artists don’t want their art style copied.

Writers don’t want to be out of business or have the price of their work degraded by GPT spam.

There are a lot of things people don’t want AI to do, but I can’t wait for them to use AI to remaster Star Trek Next Gen (and other old sci-fi) into 4K. This kind of application hurts nobody.


The pictures I see from iPhones look like a better photographer took them when zoomed out, but gross and fake when zoomed in, because of all the weird filtering. Isn't that basically what AI upsampling does? It doesn't add information, so it's essentially imagining what an optimal image would look like. Personally I'd rather pass when it comes to video.
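
To make that concrete: classical upscaling only interpolates between the pixels that are already there, so no detail is recovered, while "AI" super-resolution hallucinates plausible detail that was never captured. A quick sketch of the non-AI baseline, assuming Pillow 9.1+ is installed and a hypothetical frame.png input:

    # Classical (non-AI) upscaling: interpolation adds pixels but no information.
    # Assumes Pillow 9.1+; frame.png is a hypothetical input frame.
    from PIL import Image

    img = Image.open("frame.png")
    w, h = img.size

    # 4x bicubic upscale: smoother and bigger, but no detail is recovered.
    big = img.resize((w * 4, h * 4), resample=Image.Resampling.BICUBIC)
    big.save("frame_bicubic_4x.png")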


I would be so, so happy if Apple would just give users the option to turn off all the AI crap on the camera. It sucks so bad. I have an 8MP Canon point-and-shoot from 2008 that takes way better pictures than my iPhone 15. And I'm not surprised that a larger sensor with larger, better glass takes better pictures with finer detail and less grain, I just wish more people understood what they're buying.

I've seen a few "4K" AI-upscaled movies and they look horrible. Rubber faces and uncanny-valley effects every few seconds. What a shame. I'll stick with 1080p on Blu-ray.


I'm curious why you would want 4K in this case? I was over at a family member's place and threw on an episode of TNG to kill time. I was trying to figure out why everything looked so cheap: Picard's uniform, other costumes, the alien makeup. It was because they'd increased the definition past what the sets, costumes, and makeup were designed for. Personally I prefer the fuzzy look which appears more 'realistic' to me.


I saw a fan-made AI 4K remaster (Star Trek Borg) and it was a lot better than the remastered Star Trek on Amazon Prime.

I think the AI adds crispness that was not captured due to limitations in the cameras (and skill of the cameramen) at the time. Possibly it also fakes textures to make them more visually appealing.


This is a good point, but AI also will allow us to enhance the video in other ways than upscaling, such as inserting (or replacing) CGI / special effects. We should be able to really up the visuals.


I don’t understand who the audience is for 4K upscales. I’ve never seen one that looks better than the original resolution, and any halfway decent screen will be able to match your content’s resolution and framerate. I consider myself a quality enthusiast when it comes to media and I won’t touch something upscaled. Some stuff was meant to be seen in SD.


> I can’t wait for them to use AI to remaster Star Trek Next Gen (and other old sci-fi) into 4K. This kind of application hurts nobody.

If the remaster only improves video and audio fidelity sure. But they may also swap out characters, personalities, and plot points, based on the whims of the studio.

AI controlled by major power brokers is going to further the aims of those power brokers.


If there’s a good tool floating around to do the upscaling, I bet committed fandoms are about as likely as major companies to gain access. Superfans won’t care about copyright. The only way that wouldn’t seem likely to me is if the company owns or unilaterally controls the tool, which we haven’t seen any indication is a likely business model.


The original Star Wars trilogy is now ruined because of overzealous “remastering”.


Personally I’d rather AI be used to help fix society and health issues. Help people thrive and survive, things like that.


Human problems can only be solved by humans, not technology.


Is this an accurate representation of your meaning? Because technology has been relentlessly solving or making progress on solving human problems for thousands of years. Agriculture alone is a series of technological improvements which have solved an unfathomable number of instances of the "I have no food" problem, one of the most recurring and fundamental problems of all life. Avoiding diseases, healing injuries, trying to get three states over in a hurry because you heard your grandfather is about to die, communicating without being intercepted, satisfying wanderlust, making communication easier - all of these are very human problems which have been fully or partially solved by technology.


I’m not sure that’s true, are you suggesting the internet hasn’t solved a single human problem?


Nope. It hasn't solved anything.


Wow, that's a pretty strong opinion. You do see that you are on a website on the internet right now, having conversations and discussions with strangers who could be from anywhere in the world, about technology and stuff that never would have happened before the internet.


Agreed, but these things are actually harder than upscaling Star Trek, unfortunately.


It's not an either-or. People will try it for all sorts of things and keep the ones that are useful.


FYI, there are professionally done remasters of TNG into 1080p - it was shot on film originally, not video. And there's a decent-quality fan upscale of DS9; it was shot on film too, but the TNG remaster didn't sell all that well and the studios didn't want to spend the money on DS9.


Wasn't TNG shot on film and remastered [yes: 0]? Not sure if it made it to 4K, and it probably could use some AI clean up. I did read that DS9 was scaled up with "AI" because it was mostly CGI. I read on reddit that fans are even doing it, with mixed results.

[0] https://trekmovie.com/trek-remastered/tng-remastered/


Isn't fraud already illegal?


They think there will be additional deterrence by making it super illegal. See "identity theft" for a previous example.


So, did their action solve it?


Yes, and assaulting someone is also illegal (whether with a weapon or not), reckless driving is also illegal (whether you are drunk or not), and violent hate crimes are illegal (whether you had discriminatory hate in your heart or not).


Fraud has to be prosecuted by the Dept of Justice. Deceptive business practices can be independently pursued by the FTC.


The existing criminal process can prosecute fraud after the fact; but there are potential FTC regulations which could make it either harder to commit it or easier to detect it, acting as a deterrent before fraud is done.


So individuals still have fair use for protests and comedy?


The FTC’s power only extends to commercial activity, and in this case it appears limited to implied (impersonated) commercial endorsements.


Key words - “allow for civil penalties”.

Civil is a much lower bar than criminal when it comes to court cases.


Seems to me people are not easily impersonated.

However, it's the false belief in a proxy persona - a remote telephone call, video call, or any other non-in-person interaction - that gives rise to impersonation.

Might as well simply say that all persona non grata communications are unenforceable.


Unfortunately the cat is out of the bag; we can't simply legislate this away.


The same thing has been said before. Doesn't make it true.

Lawn mower manufacturers said they couldn't make lawnmowers safe, that it was impossible. Until the government mandated that they had to.


Lawnmowers can't emulate the President of the United States.


People respond to incentives.

We have insanely high quality printers, yet we do not have much counterfeiting.

Just because we can do something easily and illicitly doesn't mean that people will do things illicitly if the proper incentive structure is implemented.


What proper incentive structure do you suggest? There are plenty of protections for printers and even then we do have (printer based and otherwise) counterfeiting.

We are talking about a software-based solution that can emulate any public figure (locally) in a way the average person will not be able to recognize as fake. This is a categorical risk to the information age.


we can't legislate it away, but we can throw the book at people who do it. i don't understand why the knee-jerk cynical response is "why bother," as if that will make the problem better.


Do you want to regulate every GPU that can run a video sim?

An easier requirement: have official sources embed a public-key signature in their videos. This is a solved problem.
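
A minimal sketch of what that could look like, assuming the cryptography package is installed (file names here are hypothetical): the official source publishes its public key once, signs each released video, and anyone can check a clip against that key.

    # Sketch: an official source signs a video; viewers verify the clip against
    # the source's published public key. Assumes `pip install cryptography`.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Done once by the official source; the public key is published widely.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    def file_digest(path: str) -> bytes:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).digest()

    # Publisher side: sign the hash of the released video (hypothetical file).
    signature = private_key.sign(file_digest("speech.mp4"))

    # Viewer side: verify the received clip against the published key.
    try:
        public_key.verify(signature, file_digest("speech.mp4"))
        print("matches the official release")
    except InvalidSignature:
        print("edited, re-encoded, or not from the official source")

Note this only proves provenance of bit-exact official releases; a re-encoded or cropped copy fails verification even when genuine, which is part of why it addresses only a narrow slice of the problem.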


is this regulating every GPU? this simply says there are legal consequences for aiding fraud.

laws that say murder is illegal are not regulating knives.


Governments are very trigger-happy with regulation. Like I said, this is an impossible problem to solve without public/private-key verification (or an alternative).


Embedding a key might help technologically distinguish a small subset of the videos we're worried about, but 1) OK, so then what? You still need the regulations that make impersonation not allowed, which is what these are. 2) What about unofficial sources, hot mics, leaked tapes, etc.?

Basically, you're suggesting a technological approach that's fine for a narrow set of concerns, but you still have to regulate and disallow behaviors so they can be prosecuted. The actual problem here is not a solved problem; yes, in one very narrow subset of problems, a more complicated solution than hosting official videos on official sources would be to require official sources to also sign the video.


When dealing with Western actors, this is symmetric warfare. If you are a political party and the opposition impersonates you, you could do the same back.

What concerns me is the asymmetric threat, such as that posed by North Korean dollar groups. For the uninitiated, this is a real threat, but let's for a moment not think about the silliness of present-day Communists getting extremely ruthless about stacking cash.

The model I'm concerned about is, say you have an NK hacker group and they make a very, very convincing video of a CEO doing something embarrassing (shout out to a former UK PM's alleged porcine initiation ritual) with a view to making cash.

These people are focussed on the bottom line. They could structure their extortion demand to be F.O. money and get paid with little fuss. And do it over and over.

On the one hand, the replicability of such attacks concerns me. On the other, I have been considering a future where we are embarrassed, or exposed to embarrassing content, on an industrial scale.

Embarrassment is a social concept that we all deal with, and deal with it we do. It could be that the AI impersonation mess gets so bad we all become inoculated to this type of content because virtually everyone notable has become a victim already. Could it become the cost of doing business?


> When dealing with Western actors, this is symmetric warfare. If you are a political party and the opposition impersonates you, you could do the same back.

No it isn’t symmetric.

There is no balance between party A being fraudulent so party B now needs to be fraudulent.

In both cases, the public C is the victim.

A permissive tit-for-tat view of fraud simply encourages an arms race of victimization.

Organized crime groups compete too. That competition doesn’t result in some kind of optimal societal impact balance in the absence of legal responses either.


AI watermarking would be a boom business.


Why would the scammers apply watermarking to their fraudulent data? Or do you expect that the FTC will somehow make the open-source generative models disappear worldwide?


I meant some watermarking that can’t be redone even if the weights are shared

It is still an open question, but I think money will be dumped into this area for the protection of IP and for compliance.
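
For a flavor of where that money is going: one published direction is statistically watermarking the output rather than the weights, e.g. the "green list" scheme of Kirchenbauer et al. (2023), where the sampler is nudged toward a pseudorandom, context-keyed subset of tokens and a detector later checks whether that subset is over-represented. A toy sketch with a made-up vocabulary and sampler (real schemes bias the model's logits):

    # Toy "green list" output watermark, loosely after Kirchenbauer et al. (2023).
    import hashlib, random

    VOCAB = ["tok%d" % i for i in range(1000)]  # stand-in vocabulary

    def green_list(prev_token: str) -> set:
        # Stable per-context seed (built-in hash() is randomized per process).
        seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
        return set(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

    def generate(n: int, bias: float = 0.9) -> list:
        # Watermarked sampler: with probability `bias`, pick a green token.
        out = [random.choice(VOCAB)]
        for _ in range(n - 1):
            green = green_list(out[-1])
            pool = green if random.random() < bias else set(VOCAB) - green
            out.append(random.choice(sorted(pool)))
        return out

    def green_fraction(tokens: list) -> float:
        # Detector: about 0.5 for normal text, well above 0.5 if watermarked.
        hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
        return hits / max(len(tokens) - 1, 1)

    print(green_fraction(generate(200)))                               # ~0.9
    print(green_fraction([random.choice(VOCAB) for _ in range(200)]))  # ~0.5

The catch, as the reply below notes, is that the bias lives in the sampling code: anyone holding the weights can simply sample without it.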


> I meant some watermarking that can’t be redone even if the weights are shared

I've been working a bunch on neural networks, and I'm quite convinced that this isn't possible (as in, not that we haven't built a tool for this yet, but that it's not possible to build such a tool, and weights can always be redone). I'm not wholly certain, as I have been surprised before, but I'd need to see some evidence to even consider this plausible. Fine-tuning can change a lot, and adversarial examples can target very specific aspects of the model.

Furthermore, people do train models from scratch. Even if 99.9% of models included unremovable watermarks, all fraudsters need is for a single unmarked model to exist; and if it doesn't, they can fund the GPU time to train one, since they're running a high-revenue business. Fraud has more money than academia.




