AI is making Meta's apps basically unusable (fastcompany.com)
61 points by skilled 16 days ago | 49 comments



> My Facebook is engulfed with AI-generated images and gullible users who seem to have no idea that they are looking at fakes.

I honestly believe that most, if not all, of the users replying to these AI generated posts may in fact be bots themselves, built to boost engagement. It's a downward spiral. AI generates image > Bots congratulate it > It learns to generate images more like it > Images become even more deformed. It's gotten pretty bad. https://twitter.com/henningsanden/status/1776303455717535840


So... what is this for? Like what's the end goal here? A page with AI-generated spam doesn't make you money even if a real human looks at it. After these pages get popular, do they plan to start incorporating ads and product placement into their spam?


I don't think most of the Internet realizes the competitive dynamics playing out across social media right now... Content creators and even scholars compete against people who live in third-world countries, massive corporations, kids in their bedrooms, and many political and economic interest groups for attention over the entire internet.

People living in economically depressed conditions are eager and happy to make $10 off a creator fund, or even $5 for selling a user account with 1 million followers to a kid in America who wants to look like a fashion or music industry influencer... These platforms are not full of honest business people struggling to create public awareness for their brand; they're full of jaded honest people, corporate influence, political influence, and scammers.

There are tons of people churning out drab content, often lying in it about success secrets and tips for succeeding on these platforms. There are also tons of ad-sponsored posts run by everyone from scammers to people selling goods manufactured in sweatshops, on top of all the legitimate (honest) product and service sellers.

The people competing for social media attention are as diverse as the schemes they generate, and that's precisely what is destroying any possible productivity for honest content, contributors, and artists on social media.

There is an endless stream of people eager to capture public consciousness, and the platforms weaponize it to make investors and management rich -- just like at a farmer's market, the landlord can basically sit back and charge everyone rent, while the booth operators all furiously scramble to position themselves and sell enough to make a living. The platforms have grown far too large for anything great to come out of them for honest operators.


Meta doesn't want these bots, but as Zuck said in his interview -- it's adversarial. Some of these bots are run by nation-state actors who can easily adapt to Meta's attempts to shut them down.


I know that Meta also wants these bots shut down.

I'm asking who creates them and why. How does a nation state benefit from a spam post about a child riding a watermelon motorcycle with lots of AI-generated comments?


Because if fresh accounts immediately started pushing narratives for said nation state, they would easily be identified and banned. The game of spam is about slowly gaining the trust of the target algorithm by appearing legitimate.


Because it gives the fake profiles the look of legitimacy. These bots will be used later for political manipulation (if nation state) or scams or selling influence (if criminal gangs).

These bots are all over twitter. Make a new twitter account and don't post for two weeks. You'll be followed by about 15 bots, each with an AI generated profile picture, no posts and a history of likes of a random smattering of posts with no coherence. The fake likes are purely for the veneer of legitimacy.


One could come up with dozens of plausible explanations in seconds of thinking:

1) build networks of accounts that amass many followers to facilitate data collection (scraping public or friends-only profile data, photos, and text) without hitting rate limits, for the purpose of building surveillance-ready profiles of a nation's or the world's citizenry and face-identification datasets on foreign populations for later use -- in other words, building a web-scraping network inside a foreign-controlled data ecosystem with mixed privacy settings for non-logged-in accounts

2) maintain swaths of accounts with many followers for a "rainy day" when you want to be able to communicate en masse to a nation or language-sphere's populace with subtle propaganda or straight up messaging

3) distract a populace from any number of subjects, or manipulate them into paying attention to the things you want them thinking about / looking at

4) provide cover for #3 by creating such a mass of content that the stuff you really care about is more easily hidden amongst the noise

5) dumb down a foreign population to weaken them economically -- everyone knows that Douyin (domestic TikTok) has legal limits in place on allowed content, especially for minors, intended to promote intellectual growth and productivity; there are no such limits on TikTok, and it wouldn't be hard to imagine they could facilitate the opposite for the PRC's subtle benefit

... I'm just freestyling here and I'm sure I could reach 20 in a matter of 10-20 minutes


> AI generates image > Bots congratulate it > It learns to generate images more like it > Images become even more deformed

I think it's pretty unlikely that they are instantly fine-tuning back on that feedback without filtering and deduplicating, if only because training on your own unfiltered outputs is fairly well known to degrade results.
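For what it's worth, "filtering and deduplicating" here only needs to mean something like the sketch below -- entirely hypothetical, with made-up field names and a made-up quality score, sitting between the engagement signal and any fine-tuning set:

    # Illustrative only: drop exact duplicates and low-quality samples before
    # they ever reach a fine-tuning dataset. Not Meta's actual pipeline.
    import hashlib

    def dedup_and_filter(candidates, seen_hashes, min_quality=0.8):
        kept = []
        for sample in candidates:
            digest = hashlib.sha256(sample["image_bytes"]).hexdigest()
            if digest in seen_hashes:
                continue  # exact duplicate of something already kept
            if sample["quality_score"] < min_quality:
                continue  # below the quality bar
            seen_hashes.add(digest)
            kept.append(sample)
        return kept

Even a crude gate like that breaks the "bots congratulate it, so it trains on it" loop described upthread.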


Downward spiral indeed. Is it downward to the lowest common denominator of the internet population?


> The Meta AI experience has so far been a spam-filled one. Nowhere is that clearer than on Instagram where the search function, once a place to look up a friend’s account, now exists seemingly to usher users into conversation with a chatbot. “Ask Meta AI anything” it now reads in my search bar. Um, no. I just want to look up my dog’s daycare to see if they posted any pictures of her.

I've experienced this and it's perplexing. I was recently looking for things to do in Osaka during my upcoming visit. Probably a high-value query for advertisers: I would definitely be receptive to ads from restaurants, museums, etc. But instead it dumped me into a chatbot that wasn't even capable of funneling me to any ads like that???


> But instead it dumped me into a chatbot that wasn't even capable of funneling me to any ads like that???

Streaming gave us content without ads for a while before introducing a form of advertising worse than what we had with cable, worse because it can be made unskippable.

Now they'll try to dupe us the same way with AI, expecting us not to figure out that the endgame is opaque and seamless conversational advertising.


Advertising has to be clearly announced as an ad, however.


Regulations are for companies incapable of releasing features that regulators don't understand.


Regulators do understand advertising. That ground is well-tread.

> Probably a high-value query for advertisers [...] But instead it dumped me into a chatbot that wasn't even capable of funneling me to any ads like that???

I doubt that it's for advertisers. Come earnings time, Meta needs to be able to tell Wall St that their AI search has 100s of millions of users.


Feels like every product person in the software industry replaced their whole toolbelt with a hammer and is now on the hunt for nails.


Yesterday I saw a TV ad for an AI Powered air conditioner that supposedly sets the temperature itself... what? How would the AI (whatever that might mean) inside possibly know what the current occupants of a room prefer?


“One of the smartest features of the Breathe-o-Smart is that it cannot possibly go wrong. So. No worries on that score. Enjoy your breathing now, and have a nice day.”

https://hitchhikers.fandom.com/wiki/List_of_technology_in_th...


> How would the AI (whatever that might mean) inside possibly know what the current occupants of a room prefer?

This isn't as weird as you think. Nest thermostats do something similar: https://support.google.com/googlenest/answer/9247510?hl=en


Huh

So maybe it's not as pointless as it first seems

Calling it "AI" is a bit disingenuous, like calling the auto brightness feature "AI"


Yeah, it's just a pretty simple algorithm.

If it constantly listened for us saying stuff like "oh, it's cold" and noticed us grabbing a blanket, and then reacted, yeah, that would be AI. I definitely don't trust Google with that kind of info though!!
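A bare-bones version of that kind of schedule learning -- a toy sketch, not anything Nest actually ships -- could be as simple as replaying the average of your past manual adjustments for each hour of the day:

    from collections import defaultdict

    class ScheduleLearningThermostat:
        # Toy model: remember manual setpoint changes and suggest the average
        # setpoint for each hour of the day. No machine learning involved.
        def __init__(self, default_temp=21.0):
            self.default_temp = default_temp
            self.history = defaultdict(list)  # hour -> manual setpoints seen

        def record_manual_adjustment(self, hour, setpoint):
            self.history[hour].append(setpoint)

        def suggested_setpoint(self, hour):
            past = self.history[hour]
            return sum(past) / len(past) if past else self.default_temp

    # After a week of someone turning it down to 19 degrees at 23:00...
    t = ScheduleLearningThermostat()
    for _ in range(7):
        t.record_manual_adjustment(23, 19.0)
    print(t.suggested_setpoint(23))  # -> 19.0

Calling something like that "AI" in a TV ad is exactly the kind of stretch being complained about here.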


AI is the new blockchain :P A lot of hype and nonsense marketing.


It's not just the introduction of LLMs that makes their software basically useless.

I'm currently planning a bachelor party, and decided to create a Facebook account (I terminated all my Facebook/Instagram/WhatsApp accounts many years ago) to reach everyone, and a group to organise in.

The user experience is horrendous. Comments in groups disappear, notifications link to items that don't exist, and messaging is so broken that I gave up and created a mailing list instead.

Then there are a myriad of smaller quality-of-life issues as well.

Why anyone uses this mess on a regular basis is beyond me.


Wow, the Meta AI pretending to have disabled children in order to inject itself into a conversation it was never invited into is incredibly creepy.


Tbh I'm kind of shocked that the PMs overseeing that system didn't have their devs use a fake username.


I know it's old-hat to say at this point, but I'm getting really exhausted of seeing companies trying to sate investors and techbros by introducing more and more AI-powered "features" into existing applications. I don't need to be able to generate images in my Facebook messenger chat with my parents, nor do I need to ask Microsoft Copilot how to use my own computer.


I work at a B2B SaaS and we have so many customers asking us about our plans for integrating AI into our platform, for no reason other than that their leadership/stakeholders want to use more AI.

We ask them what they want the AI to do and they don't have an answer. They just want AI


> We ask them what they want the AI to do and they don't have an answer. They just want AI

Perhaps I'm just not management-brained enough, but this just seems so foreign to me - why introduce features just because they're the buzzword? It's not even like they have a path to making it profitable; it's just the same buzz as "Web3" and "Crypto" repackaged.


That is simple. The board sets top-level strategy goals. Right now it could be AI; at one point it was mobile. Then groups in each silo fight for resources (money) by coming up with projects that incorporate these strategic goals. If the goal is mobile, the customer service department fights for phones for each CS agent. Sales wants all websites mobile-friendly. Even the internal tools team jumps in and makes internal tools mobile-friendly (though no one using them has a mobile device). Much time and money is wasted, but people get promoted, bonuses get paid, and jobs get created because of these goals. Senior leadership gets to say "we are more mobile than that company."

The entire top down strategy approach thing is a waste.


There was a time, 8 years or so ago, when Sundar declared that each team needed an AI OKR. Every ambitious manager and team lead took it to heart and basically tried to wedge some AI-adjacent feature into the product. That was not fun.


OKR: Objectives and Key Results


I can't really explain it either, but it's just how it works. Internal proposals that use the latest buzzwords succeed much more often than those that don't. I remember seeing this with big data, smart cities, sustainability, resilience, and a few others - often when it was a huge stretch to find any relevance whatsoever.


As a business leader you want to take the best chances to overtake your competition. That means trying out tools like AI, crypto, mobile, big data, etc.

Would those tools help? Most of the time, no, but it is your responsibility to try ideas that might give you an edge.


Seems perfectly reasonable for people to be chasing buzzwords, when most people don't understand the underlying causal reasons for actual business success. Much safer to follow the herd rather than be left behind.


Every time I talk to my GCP account reps they kind of half-heartedly pitch me their new AI features. I respond that there are no business problems I would see value from approaching with AI / LLMs (as that’s what they really mean by “AI” these days). They basically immediately agree and we move on to real problems.


It's one of those "XY problem" things people on Stack Overflow talk about. The X is "we want to achieve the same or higher productivity while reducing payroll," and they've heard that AI (a Y) is a way to do that.


This is also called solutioneering.


Maybe this'll work.


At this point it really is Our Lord and Saviour the Blockchain all over again.


"We have linear regressions in our metrics pages, yes we have AI"


"We have if/else statements in our code, yes we have AI"


At least 3 years of ignoring the prompt to review the new WhatsApp terms of service have been the perfect training for ignoring the 2 non-removable AI calls to action it's now added to the app's home screen, for no good reason I can think of. That's literally never going to be a thing I want to do if I'm opening WhatsApp.


My Hacker News headlines today.

35. AI is making Meta's apps basically unusable (fastcompany.com)

36. AI-powered cameras installed on Metro buses to ticket illegally parked cars (latimes.com)

37. The AI expert who cited himself thousands of times on scientific papers (elpais.com)


Just wait a few years and it will be

5. AI bot monitors your actions at home. Reports any unpatriotic actions to government.


haha, it's called Amazon Alexa.


Police already access Ring cameras.


> Frankly, it’s bizarre that the most powerful social media advertising company in the world has so little information about me after a decade-and-a-half of use that it cannot even give me personalized suggestions about how I might use this dang thing.

The data collection was never about helping you or making your life easier. It's always been about using that data against you. Most of the time, that means manipulating you or separating you from more and more of your money. Facebook doesn't care if you have zero interest in something; if Facebook wants you to think about something, it will put it in front of your face either way.


I would imagine that providing useful recommendations and search results would make it easier to bring you to places where you're more likely to part with your money.



