"Before we begin, we'd like to say the following words to help our share buyback program: difficult, concerning, challenges, headwinds, losses, failure, negative, consequences, layoffs, closures, adverse events. Thank you. We will now begin our earnings call..."
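The joke above isn't far from how simple dictionary-based sentiment scoring of transcripts actually works. A minimal sketch (the word lists here are invented for illustration, not any real financial sentiment dictionary):

```python
# Toy lexicon-based sentiment scorer for call transcripts.
# Word lists are made up for illustration only.
NEGATIVE = {"difficult", "concerning", "challenges", "headwinds",
            "losses", "failure", "negative", "layoffs", "closures"}
POSITIVE = {"growth", "record", "strong", "exceeded", "profitable"}

def sentiment_score(text: str) -> float:
    """Positive minus negative word count, normalized by length. Crude but common."""
    words = [w.strip(".,:;").lower() for w in text.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

honest = "We delivered record growth and exceeded strong guidance."
stuffed = ("Difficult, concerning, challenges, headwinds, losses, failure. "
           "We delivered record growth and exceeded strong guidance.")

print(sentiment_score(honest) > 0)   # True: upbeat call scores positive
print(sentiment_score(stuffed) < 0)  # True: same facts, the stuffed preamble flips the score
```

A scorer this naive really is gameable exactly as the joke suggests; real systems are more sophisticated, but the underlying incentive is the same.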
I wonder if this opens the platforms to adversarial use cases, such as gate-crashing the Q&A session with such keywords, perhaps even spoken at ultrasonic frequencies so that humans can't hear them but machines will (if the models are fed raw audio rather than meeting minutes, in an attempt to pick up emotions)... :-D
2025: CEO apologizes after the barely audible cries of an infant in the background of a WFH earnings call tank the stock price, as deep learning algorithms trained on misery detection interpret them to mean the company is in dire straits.
I wonder how long until we see the first lawsuits for stock price manipulation using this feedback loop?
Imagine an unscrupulous executive making factually true statements engineered to mislead automated assessment algorithms, so that they or their associates could buy the dip.
A great example of why the success of ML in typical ML domains can't be trivially translated to finance. Typical domains lack feedback. A dog won't alter its appearance as a result of being classified by image recognition algos.
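The dog example can be turned into a one-screen simulation of the feedback loop: once speakers learn the classifier's decision rule, the feature stops separating healthy companies from troubled ones. This is a toy sketch with invented numbers, purely to illustrate the Goodhart-style dynamic:

```python
import random

# A scorer flags a call if it contains too many 'bad' words.
THRESHOLD = 3

def bad_word_count(healthy: bool, adapted: bool) -> int:
    """Round 1: word counts reflect true health.
    Round 2 (adapted): everyone self-censors below the threshold."""
    if adapted:
        return random.randint(0, 2)
    return random.randint(0, 2) if healthy else random.randint(3, 6)

def flagged_rate(healthy: bool, adapted: bool, n: int = 1000) -> float:
    return sum(bad_word_count(healthy, adapted) >= THRESHOLD
               for _ in range(n)) / n

# Before adaptation the signal works; afterwards it is useless.
print(flagged_rate(healthy=False, adapted=False))  # 1.0: troubled firms get flagged
print(flagged_rate(healthy=False, adapted=True))   # 0.0: same firms, signal is gone
```

The dog never changes; the CEOs do, and that is exactly the difference between image classification and finance.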
This is a good point but allow me to play devil’s advocate. If a breeder uses an ML system to select from many possibilities to find breeding specimens, then the product of those specimens is used for the next generation of ML system, there is now the possibility of a feedback loop.
What I think is funny about this is that some people worry that AI will control us. But if AI were really controlling our lives, would we even know? In the example of the paper, machines have taught us how they would prefer we talk about equities.
Wouldn't a great intelligence just find subtle ways to incentivize us to do what it wanted, without announcing itself or its motives? Maybe the machines already took over.
Fascinating. Is there already a nice buzzword for this phenomenon in general? I mean, intelligent agents read financial reports and will do our online shopping in the future. Maybe something like IAO? Intelligent Agent Optimization? Just asking for a friend, of course...
Interesting how the CEO feedback effect also depends on what share of a company's stock is traded algorithmically. We're also seeing this kind of monitoring elsewhere, for example in AI used to predict whether students are cheating. https://unintendedconsequenc.es/ceos-students-algorithms/
> * Recommender engines (e.g. music similar to the one I'm listening to)
These are only ethical when opt-in; otherwise they violate user privacy by building preference profiles, and they require additional private data to make accurate inferences.
> Face unlocks for phones
Heavily racially biased. Facial recognition in general is heavily racially biased.
Other than that, I guess that's a good list. If ML stayed in those arenas it might be a net good; unfortunately, the things on that list are mostly side effects rather than the targeted intentions of where research is going (i.e., where it's funded). The vast majority of ML/AI work is focused on hijacking people's private data to steal their attention and psychologically manipulate them.
"Not working as good" is not the same as "actively working against". Unless of course a perfect Aryan visage serves as a master key to all face-locked phones.
The question wasn't "which ML use cases are potentially good", it was "which ML use cases are seen as predatory by users". So it doesn't matter that face detection has the potential to be good; it's currently bad.
Bad, but not predatory. For example, most speech recognition systems I've tried produce hilarious results with my accent, but I don't feel the tech is out to get me; it is just less than useless.
On the other hand something like ad-tech feels more dangerous the better it works, to the point that I employ active countermeasures.
Not so sure about these. Personally, I mostly feel deprived of options. (E.g., I assume new videos are uploaded to YT every day, and there must be more out there than three-month-old news clips and videos I've already seen, but it has actually become hard to find them. In my personal YT universe, human creativity has mostly come to a stop.)
Most new cars sold in Brazil, for over a decade now, can run on gasoline, sugar-cane ethanol, or a mix of both in any proportion. One can fill up with either or both without bothering to inform the car, and when turned on, the engine quickly adjusts to the mix present. AFAIK, that adjustment is mostly ML.
Machine translation. Dictation. Assistive devices. Robotics. Control systems. Protein folding. Search that's better than ctrl+f. Helping you find a good movie to watch tonight. And much much much more.
I wonder how long until a corporate disclosure ends up accidentally (or purposefully?) serving as an adversarial attack.
Back when I studied it, there were plenty of known 'soft spots' in the different methods (implied negation, euphemism, etc.) that corporate nonsense-speak seems well suited to trigger.
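The negation soft spot is easy to demonstrate: a bag-of-words scorer can't tell "material losses" from "no material losses". A minimal sketch with an invented word list:

```python
# Toy illustration of the negation soft spot in bag-of-words scoring.
# The negative-word list is invented for illustration.
NEGATIVE = {"losses", "impairment", "litigation", "decline"}

def naive_negative_hits(text: str) -> int:
    """Count negative-lexicon words, ignoring all context."""
    words = [w.strip(".,").lower() for w in text.split()]
    return sum(w in NEGATIVE for w in words)

bad_news  = "We expect material losses and a decline in revenue."
good_news = "We expect no material losses and no decline in revenue."

# Both sentences produce the same number of negative hits (2),
# even though the second one is superficially reassuring.
print(naive_negative_hits(bad_news))   # 2
print(naive_negative_hits(good_news))  # 2
```

Corporate language full of hedges like "no material adverse effect" sits right on top of this blind spot, which is what makes it such an inviting adversarial surface.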
It reminds me a lot of quantum mechanics:
“The act of observing disturbs the observed”