Investors use AI to analyse CEOs’ language patterns and tone (reuters.com)
109 points by pseudolus on Oct 20, 2021 | 76 comments




We live in an era of 2.0s. This is probably going to turn out to be Phrenology 2.0.

It isn't even a settled debate whether CEOs know what is going on or are a particularly important driver of outcomes. It is unlikely that NLP models can foresee the future, even if bankrolled by hedge funds.


Of course they cannot predict the future in the strong sense, but they may be able to extract some fraction of a bit of information about the distribution of possible outcomes, slightly reducing the variance. This is enough for a hedge fund to make money. Similar to knowing something like how full a department store's parking lot has been over the last quarter. It won't give you an accurate prediction, but it can give you a slight edge over the competition.


> It isn't even a settled debate whether CEOs know what is going on or are a particularly important driver of outcomes.

It's pretty clear that a negligent/criminal CEO can lead to the bankruptcy of a company and a loss for investors. So yes, you can say at the extreme end that CEOs are drivers of important outcomes.

Now whether it's effective I think is another subject altogether. My own opinion is that micro-expressions, body language, etc. have been studied and used by the FBI/CIA for their field work. The idea being that once you establish a baseline behavior, you can notice "clusters" that deviate from the baseline based on verbal+non-verbal cues. So I don't see why an AI couldn't do the job.


Warren Buffett has a famous saying: "I always invest in companies an idiot could run, because one day one will".

There are many companies where it's likely that a negligent CEO (even a criminal one) wouldn't lead to bankruptcy, or even come close. Many companies might be better off with a negligent CEO rather than one actually trying to do much. Right now Google could be run by a person on the beach sipping margaritas. Many other big public companies with very strong competitive positions probably could be, too.


Sure, tell that to Theranos and Enron.

A criminal CEO will often lead to the dissolution of a company, so I don't really see how this follows.


Theranos never had a real business. So it didn't dissolve so much as never existed. Sure, the legal entity was created and dissolved, but there wasn't ever anything approaching a real business.

Most of Enron (some was actually ok) wasn't a real business either.

It follows like this:

A great business can often be run by a fool and be fine, because the business is just so good. Think Google, Coke, and almost all newspaper companies in the 20th century.

The opposite is not true - a terrible business is hosed no matter how good the CEO is. Think farms, most retail businesses, newspapers in the 21st century.

And most companies sit somewhere in the middle - the CEO can make a difference.


What’s your definition of a “real business”? Enron had revenue of over $100bn, 30k employees, and delivered actual products and services for many years. How’s that not “a real business”?


In my mind there are multiple types of "not real" business. Two obvious types:

i) A business whose true state is being hidden in order to raise the money that keeps it afloat, and which isn't sustainable without raising that capital. Enron did this, raising lots of debt financing. Theranos did it with equity.

ii) A business which isn't sustainable over any time period without external money to keep it afloat. It need not be fraud; it could just be stupidity on the part of investors or executives. Many internet bubble businesses were this type of "not real".

Enron had a lot of businesses under the corporate umbrella. Some of them were real (they owned hard assets, pipelines, energy generation assets). However, a lot of the revenue from other businesses was fake: derivatives revaluation accounting tricks (like, "hey, this derivative contract is now worth $50 million more because of some analysis we did, up goes revenue"), debt hidden via special purpose entities, and other similar things. The "not real" part of Enron was so big that its debts brought down the rest of it. The Chapter 11 process sold the real assets, the creditors got some money back, and some employees stayed with those businesses.


A lot of things have been studied; the question is what evidence there is that this actually works. It seems like one of those areas where people have made a lot of claims with only dubious evidence.


This sounds like AI hucksters justifying their existence… perhaps we’re moving into a later stage in the hype cycle.

I suppose you could have a portfolio of AI tools that would alternate between telling you “stonk go up” and “stonk go down”. Blame the customer for choosing the wrong one if they are unhappy.


It feels like people have been predicting the end of the "hype cycle" for the last decade.

I agree that this particular application seems... questionable, to say the least, but finance is probably the last place where data scientists need to work very hard to justify their efforts. Statistical modeling has been a core part of investing for a very long time, and ML is just a subset of statistical modeling.


Speaking of phrenology: when I was still in the investment business we were trained in the “facial action coding system”, which purportedly teaches one to read the facial tics of your conversation partner to see if they are being honest. I’m not really sure there is a signal there, but it was fun to learn, and fairly trendy in the industry at the time.

Of course we also did NLP to identify companies that were using language associated with negative returns in their quarterly conference calls. This was very successful, mostly by identifying pieces of shit we had not heard of yet. Once identified, there were usually much stronger red flags. But the performance of that strategy was good. It definitely was not phrenology. This was around 2001-2005.
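
The general shape of that kind of screen looks something like the sketch below. To be clear, the transcripts, labels, and threshold here are placeholders, not the data or model we actually used:

    # Rough sketch: score quarterly-call transcripts against language historically
    # associated with negative forward returns. Everything here is a placeholder.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Historical transcripts labeled by whether the stock underperformed afterwards.
    train_texts = ["...transcript text...", "...another transcript..."]  # placeholders
    train_labels = [1, 0]                                                # 1 = underperformed

    vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    X = vectorizer.fit_transform(train_texts)

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, train_labels)

    # Score a new call and flag it for a human to read if the model is suspicious.
    new_call = "...latest quarterly call transcript..."
    prob_bad = clf.predict_proba(vectorizer.transform([new_call]))[0, 1]
    if prob_bad > 0.8:  # arbitrary threshold
        print("flag for manual review")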


Exactly - besides, this technology will effectively be moot in the long run:

If it works well enough, firms will put their CEOs' speeches through the models to ensure that their speeches rank well, and that will cause it to fail.


Or, more actively:

The CEO's ML team reverse-engineers the model used by investors and uses it to write the speech.


Your CEO brain calipers don’t need to be worth a shit in order for you to make money with them. If the market moves predictably every time you publicly announce your findings, that’s all you need.


The example in the article was a case where information was being hidden. There was a mismatch between content and tone: "Everything's fine!" [I know everything is not fine]. Humans can already pick up on a mismatch between content and tone. We call it intuition. The things that trigger intuition are quantifiable. Using Machine Learning to systematize things we already do is pretty sane and reasonable.
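
A toy sketch of quantifying that mismatch (the transcript, audio file, and thresholds below are placeholders I made up; real pipelines use much richer acoustic features):

    # Toy content/tone mismatch check: upbeat words + flat, low-variation delivery = flag.
    # The audio path, transcript, and thresholds are placeholders, not a real system.
    import numpy as np
    import librosa
    from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download("vader_lexicon")

    def tone_liveliness(audio_path: str) -> float:
        """Crude proxy for vocal liveliness: relative variation in short-term loudness."""
        y, sr = librosa.load(audio_path, sr=None)
        rms = librosa.feature.rms(y=y)[0]
        return float(np.std(rms) / (np.mean(rms) + 1e-9))

    transcript = "Everything is fine. We are thrilled with the quarter."   # placeholder
    text_sentiment = SentimentIntensityAnalyzer().polarity_scores(transcript)["compound"]
    liveliness = tone_liveliness("ceo_call_excerpt.wav")                   # placeholder file

    # Upbeat words delivered in an unusually flat voice -> flag for a human to listen.
    if text_sentiment > 0.5 and liveliness < 0.2:
        print("possible content/tone mismatch")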


They don't have to be a driver of outcomes, just an indicator.


Or perhaps, an instigator.


Meh, you are thinking like a 1.0.

What matters isn't what their intent is; what matters is how the greater fools will react.

As such, figuring out what the fools will do can lead to successful pumping and dumping, and more liquidity skimming.

They did that successfully with Trump.


Next step: CEOs to stop attending quarterly earnings releases. IR meetings to become boring, with just the CFO reading numbers.

If I recall correctly: Steve Jobs rarely attended IR events, and Jeff Bezos just sent a letter in advance (note: beautifully written) but barely made live comments. Future guidance is often released as `something between a +30% increase and a -30% decrease`, which makes it useless.

My impression is that nowadays most PRs and releases are already meticulously reviewed. Quite an interesting paradox: the more regulations, rules, controls, and oversight are introduced, the less detailed and exciting releases and disclosures become.


> Quite an interesting paradox: the more regulations, rules, controls, and oversight are introduced, the less detailed and exciting releases and disclosures become.

Interesting example of what I see as counterintuitive behavior, which brings up a serious question.

If companies are prevented from making anything other than dry financial disclosures every quarter, how is a normal individual investor who is not an insider then supposed to judge investment opportunities?

Personally, I feel like the markets would operate better if companies, big shareholders, etc were always 100% free to make public comments without worrying about the SEC breathing down their neck micro-analyzing every statement. As an investor, I want to hear more from companies, not less.


They can research the products, the market/industry, the competitors. This is where the real information is.


As far as researching competitors goes, it's tougher to do that if the premise above holds and companies are all releasing only dry statements.

That’s my point. I’d like to hear more forward-looking statements of the kind that the SEC frowns on.


What you are saying is in some sense true, but only to a tiny degree. So tiny I'd say it's not relevant. The way to research competitors isn't to listen to the players involved. It's to deeply understand (via research, talking to real users, or becoming a user yourself) the players and their offerings and the components required to deliver those offerings. The players all have a nice coherent narrative about themselves and the competition. It's never the full story and in most cases wildly off.


One option is ICOs -- they are not regulated by anyone, and they seem to be making announcements all the time on Instagram with wads of cash. I don't understand all this, but somehow cash and rappers are part of the "proof of consuming stakes" algorithm or something...


Step after that: companies have AI represent them in IR meetings, with just the robot reading numbers.


Or companies leverage the AI internally to listen to a report from the CEO - then sanitize it and release it to the public.


Heh, lovely. AI comes full circle.


This reminds me of work done with the Enron Corpus. IIRC, it was used to cross-reference statements made in court by the same people. When the emails revealed that a statement made in court was a lie, there were often differences in linguistic structure compared to the person's other statements. The example that sticks out in my head is that use of the passive voice was a key indicator of the likelihood of a statement being a lie.

Of course these insights are statistical in nature, not definitive.
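
A rough sketch of that passive-voice signal (not the original study's method; the spaCy model and the example sentences are my own assumptions):

    # Toy version of the passive-voice signal: measure what fraction of a speaker's
    # sentences contain a passive construction. Illustrative only, not the Enron study.
    import spacy

    nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

    def passive_ratio(text: str) -> float:
        """Fraction of sentences containing a passive construction."""
        doc = nlp(text)
        sentences = list(doc.sents)
        if not sentences:
            return 0.0
        passive = sum(
            1 for sent in sentences
            if any(tok.dep_ in ("nsubjpass", "auxpass") for tok in sent)
        )
        return passive / len(sentences)

    print(passive_ratio("Mistakes were made. The figures were adjusted later."))  # high
    print(passive_ratio("I made a mistake. We adjusted the figures later."))      # low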


No way this hedging tone was imperceptible to the human audience. Hedging is all executives do when responding to forward-looking questions about earnings.


Everyone has been using sentiment analysis for a while on any recorded call with executives. Frankly, it’s often not exceptionally helpful, and having longer-duration knowledge of a company and its executives can be better for understanding when something in their posture changes. The fun alternative analyses I’ve heard of are when people start tracking executives’ travel to see who they’re meeting with.


Hey, we used to do that one, too! ACARS data to see which executives were golfing together. It's hard to figure out who is buying whom, though. It's more useful for pure entertainment. The only way I actually made returns from CEOs on airplanes was when one happened to sit next to me on a commercial flight and proceeded to edit a PowerPoint with "BUY XYZ CORP FOR $M.N BILLION" in 100-point bold letters on his gigantic laptop.


Reminds me of countless cases where, in preparation for an investor call, the CEO had to be quickly briefed on the status of a suddenly important topic he hadn't been following for the last quarter. If he's not familiar enough with a matter, I'm sure his talking about it will add noise to this AI analysis as well.

The fantastic aspect of such "solutions" for the stock market is that they're probably about as reliable as most other solutions, since they all try to conclude something from incomplete and ambiguous data points.

Their purpose is probably not to be fully accurate, but just to give more comfort when making the final buy/sell decision.


In other news, CEOs will use an AI to send press releases. We already have AIs writing articles anyway, so at some point it will become useless to check the news as it's all gonna be corporate b$. What are we gonna do then? Rediscover local journalism?


> What are we gonna do then? Rediscover local journalism?

We're probably already in the first stages of that journey, complete with the unreliability and all.


This comment is written by an AI


The funniest part of this is that you have people with no fundamental understanding of companies attempting to perform fundamental analysis... the solution that doesn't seem to have occurred to anyone here is... perform fundamental analysis.

All of the stuff that is being signalled here can also be worked out by just reading the financial reports (the issue with a lot of these models is that information leaks through... of course companies that announced downsizing are more likely to go bust; if you need NLP to work this out, finance is not for you. The example of semis is the same: everyone knew that people were uncertain about supply, you didn't need NLP for that). The only reason these results are in any way remarkable is that academics convinced everyone that there is no "signal" in conference calls... right, everyone who has ever invested money knows this is false, so this isn't surprising.

And, just like value factors, this will go badly wrong, because computers can't actually value businesses by themselves; they can't do fundamental analysis. Attempting to shortcut this is not smart.


State of the art NLP AI models actually have trouble figuring out text posts with explicit racism and hate speech, but people are trying to say that they can pick up subtle language cues from CEOs?

Consider me a skeptic.

AI NLP models have way too many false positives and negatives for this to be workable. Maybe in 10 years, but definitely not now.


> State of the art NLP AI models actually have trouble figuring out text posts with explicit racism and hate speech [...] Consider me a skeptic.

Perhaps the reason why detecting racism or hate speech is so hard for an AI is that what is considered racism or hate speech is a moving target.


Just because NLP hasn't solved all problems doesn't mean it can't solve some problems. And some problems of seeming complexity may turn out to be more shallow than others that initially appear straightforward.

Off the top of my head, racism is often so casual that stochastic AI models may have difficulty discerning a difference in syntactic structure or other linguistic features compared to similarly casual statements of a non-racist nature.


You don't need to be able to discriminate well, you just need a small non-consensus signal that may not even be perceptible to humans.

But you're right to be sceptical. Hedge funds are largely a sales job. How can you convince gullible investors to overlook the terrible performance of your sector and invest in you anyway, and then take a large slice of the profits when you get lucky? Buzzwords and neat-sounding strategies help.


My favorite example...

"...we went from a negative growth in Q4 in storage revenue on a year-over-year basis. We now have flat..."

"DELL – Q1 2022 Dell Technologies Inc Earnings Call"

https://investors.delltechnologies.com/static-files/d70442f3...


I'd suggest it really depends on the exact problem space you are working on.

Some NLP-related problems can be solved at a level of quality similar to humans (humans actually make a lot of mistakes as well).


> State of the art NLP AI models actually have trouble figuring out text posts with explicit racism and hate speech [...]

This is just wrong. State of the art NLP models are perfectly able to identify racism and hate speech if they are trained to do so.

The issue is just that most of the time political correctness is not in the training objective. In fact, general language models are trained to reproduce what they read as closely as possible, hence the racism/hate speech.

Again, the problem is how they are trained, not what they could/could not achieve.

> AI NLP models have way too many false positives and negatives for this to be workable.

Citation needed.


On the other hand, you have access to probably hours of talks by C-level executives, expressing different emotions more or less subtly.

Why wouldn't it be possible to train on that?


Next: CEOs strike back, using AI versions of themselves to talk to the AIs that analyze them.


This is the perfect business: create a three-way strategy of 1) free voice sentiment analysis for the public - get the public entertaining themselves by giving away the tools to be critical analysts of public speech; 2) enterprise voice sentiment analysis to analyze rival CEOs and corporate spokespersons; 3) enterprise voice cloning of your own CEO and corporate spokespersons, with perfect intonation and glowing belief.


How does one go about doing this? Do they convert speech to text, or run ML on the audio itself? Any open-source tools out there to play with this concept?


This might be the best NewsArticle headline on HN I've ever seen.

Why, what does it say? Can you log that in a reproducible Notebook with Docs and Test assertions please?

Or are we talking about maybe a ScholarlyArticle CreativeWork with a https://schema.org/funder property or just name and url.


Cue the (dystopian?) future where it gets used by anyone, for anything, to read between the lines of whoever is speaking.

Maybe we'll end up preferring text communication. About bloody time; voice chatting is needlessly overused. But I'm sure extroverts will solve that another way, however it suits them.


I figure when people want in-person over video, or video over audio, etc., both sides can benefit, because there are "real" signals you want to be understood, but there's also an element of detecting things the other side would rather you didn't.

So the people who are good at faking as well as reading those signals get an additional benefit over those who aren't.


2060: a black-box AI reverse-engineers cognitive ability from voice and recruits the most unlikely edge cases for its advisory board.


Investors are using AI now to figure out if the CEO is making any sense. Wow, AI is doing more and more: AI investing, AI burger flipping, AI art. I am sick and tired of hearing about how AI will improve our lives. What about human creativity, in a complex world where we need to survive, pay rent and bills, AND put food on the table, while AI is taking all the jobs away along with the creativity and connection that comes with humans spending time with other humans? What's next? AI meeting AI? A battle for who has the more precise prediction model? My 2 cents.


> What's next? AI meeting AI? A battle for who has the more precise prediction model?

That already exists, it's called the financial markets and there are daily battles between competing quant strategies.


Of course AI cannot do any of those things.

But NFTs can.


Humans have been improving their ability to read subtext and intuit background information from spoken language and facial expressions for millennia.

In my humble opinion, it seems kind of unlikely that AI will outperform people in solving this problem.


So Google can help the CEO here with its new Tensor-powered Pixel 6 phones, right?

You talk into the phone and it removes any trace of sentiment or tone, or better still lets you choose one, and then plays that version out to the listener.


Great, now everyone will sound like text-to-speech voice synthesis drones


Is this actually useful? Not that it doesn't do exactly what they say it does, but a lie detector isn't super useful when every word out of a CEO's mouth is a lie by default.


> Is this actually useful?

IKR.

Pennebaker thinks so. He's a big proponent of "computational linguistics".

https://en.wikipedia.org/wiki/Computational_linguistics

I recently read his book The Secret Life of Pronouns.

http://secretlifeofpronouns.com

I thought it'd be fun to replicate the book's claims. Like maybe analyze political speeches.

But Pennebaker's current rulesets apparently aren't available. And I didn't have the gumption to figure out how to use the tools that are available.
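
If anyone else wants to poke at it, a crude stand-in for the missing dictionaries is plain function-word counting; the pronoun buckets below are my own rough guess, not Pennebaker's categories:

    # Crude stand-in for LIWC-style analysis: share of words falling into a few
    # hand-picked pronoun categories. The category lists are guesses, not LIWC.
    import re
    from collections import Counter

    CATEGORIES = {
        "first_singular": {"i", "me", "my", "mine", "myself"},
        "first_plural": {"we", "us", "our", "ours", "ourselves"},
        "second_person": {"you", "your", "yours", "yourself", "yourselves"},
    }

    def pronoun_profile(text: str) -> dict:
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(words)
        total = max(len(words), 1)
        return {cat: sum(counts[w] for w in vocab) / total
                for cat, vocab in CATEGORIES.items()}

    print(pronoun_profile("My fellow citizens, we will not rest until our work is done."))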

Another book about truthiness is Everybody Lies by Seth Stephens-Davidowitz. https://www.amazon.com/Everybody-Lies-Internet-About-Really/...

There's got to be some overlap, right?


Sounds like hedge funds trying to market their smarts. They still haven't beaten the index funds though, so it's more likely BS.


Hedge funds beat index funds all the time. In fact, due to the averaging nature of index funds, I'd say about half the time a given hedge fund will beat them (excluding expense ratios).

However, the same hedge fund consistently beating the index fund is unlikely. Things like ARKK certainly do exist and can beat the market for a few years, but then they tend to regress to the mean.


On average, hedge funds perform worse than an S&P 500 index fund:

https://www.reddit.com/r/market_sentiment/comments/p0emqj/do...


Many hedge funds do not aim to beat the S&P500 so it’s a fruitless comparison.


I hadn’t heard that this wasn’t a goal. What is the goal of a hedge fund, then, if the competition is a low-cost asset that will likely outperform you?


There are many potential goals, for example:

- Focus on a particular sector, if a biotech-focussed hedge fund is up 4%, SPY is up 7% and biotech stocks in general are up 1%, they have still outperformed. Clients will want exposure to that sector.

- Tail-risk hedging, losing small amounts of money most of the time to make huge amounts during unlikely events.

- Lower volatility, e.g. a fund which underperforms SPY slightly, but hedges against dramatic downturns in the market, so timing is not as important for redemptions.

Sophisticated investors (the only people allowed to invest in hedge funds anyway) generally have more complicated requirements than throwing their money at an index fund and waiting decades to retire on the returns.


Because past performance isn't a predictor of future performance, at least in the stock market.

ETFs/hedge funds can have particular features, like equity protection or exposure to a certain market or markets, instead of growth. You wouldn't expect a bond fund to outperform the total market index; that's not its purpose. Its purpose is to protect your wealth in case of a downturn.

Similarly, some hedge funds (private equity and ARKK are probably two good examples) seek to find that rare 10X company they can make millions or billions from, and are okay with generally losing most of their other bets.


Saying hedge funds beat index funds is as helpful as saying some stocks beat index funds.


https://www.newtraderu.com/2021/01/31/current-renaissance-te...

"The Renaissance Technologies Medallion Fund has produced some of the greatest returns in the history of the markets. "


No one denies that individual funds can beat the market. That is the nature of the beast of averages though :)

I was curious about how much money Renaissance manages: it’s about two orders of magnitude less than Blackrock.


I was curious, so I looked up the figures:

RTC manages $165 billion; BlackRock manages nearly $7 trillion.


Soon we're going to have questions like, "Our AI interpreted your words as so-and-so. Care to comment?"


Is there a paper somewhere on this? I would really like to check out their process.


I have some swampland in Florida I would like to sell these investors.


The worst thing about human communication (baseless assumptions), finally recreated with technology. I really hope it doesn't get used and abused, but of course it will...



