Financial services shun AI over job and regulatory fears (ft.com)
32 points by marban 3 months ago | 34 comments



It's not surprising that a technology where "we don't really understand how it all works under the covers" is anathema to an industry where nearly everything must be auditable.

In finance the reasoning behind a decision (e.g. to extend a loan, to do a deal, to fund a business, etc.) is nearly as important as the decision itself, and "because the black box machine told us so" is not a sufficient explanation.


Also, HFT firms are more latency sensitive than iamchris4life[0]. No way in hell they're going to throw all their finance data through an autoregressive transformer model and an HTTP API when they're already spending $$$ on FPGAs to run their algorithms a few milliseconds faster.

[0] Top-level US DanceDanceRevolution player.


But not everyone in the algorithmic trading game is playing the HFT game of running algorithms on FPGAs for a tens-to-a-few-hundred-nanoseconds advantage.

For medium-frequency trading, the compute budget is enough.


HFT firms have been using ML and AI for years. They simply called those things by a different name: statistics.

The resulting models are fast enough for them to make trading decisions in less than a microsecond.
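
As a toy illustration of that point (made-up features and weights; Python here only for legibility, since the production hot path would be ported to C++/FPGA): the "AI" is fit offline, and live inference is a single dot product.

    # Toy illustration (invented features, not any firm's actual pipeline).
    # The "model" is plain least-squares regression fit offline; live
    # inference is one dot product, which is why it can run in well under
    # a microsecond once ported to C++/FPGA.
    import numpy as np

    rng = np.random.default_rng(0)

    # Offline: fit weights on historical features vs. future returns.
    X = rng.normal(size=(10_000, 8))   # e.g. order-book imbalance, momentum, ...
    y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=10_000)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Online: "inference" is just a dot product on the latest feature vector.
    features = rng.normal(size=8)
    signal = features @ w
    print("buy" if signal > 0 else "pass", signal)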


You are totally right. Information obligations and liability are the critical issues here. No more luring customers into financial constructs that no one really understands but that the bank wins on in the end.

Informed decision-making is, by definition, boring. It appeals to the rational decision-making process.

I think this can be an opportunity for banks to actually stand out. Constraints are the breeding ground for innovation. ;)


Since making a decision is in itself subjective, does it matter if AI makes the decision? Two bankers may reach different conclusions on the same data, or may have different risk tolerances or strategies.

As for auditability, all of the data is kept and archived. AI is not deleting the data that is captured. I'm certain you could use an LLM to generate convincing arguments either way based on an application.

Rejections should be manually reviewed in most cases to cover your liability.


> Since making a decision in itself is subjective

Not exactly, and that's the whole point. E.g. the reason things like credit scores exist is to provide an objective, calculable metric that determines whether someone should be offered credit. And, critically, how that metric is generated follows some straightforward rules, so that it can be shown to be free of illegal biases.
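
As a toy sketch, with invented point values rather than any bureau's actual formula, the key property is that the whole function is a fixed, inspectable rule table:

    # Hypothetical point values for illustration only -- real scorecards
    # differ, but the shape is the same: a fixed, auditable rule table.
    def credit_score(on_time_ratio, utilization, history_years, recent_inquiries):
        score = 300
        score += int(250 * on_time_ratio)      # payment history
        score += int(200 * (1 - utilization))  # credit utilization
        score += min(history_years, 10) * 15   # length of credit history
        score -= recent_inquiries * 10         # recent hard inquiries
        return max(300, min(score, 850))

    # Every point of the result traces back to one input and one rule.
    print(credit_score(on_time_ratio=0.98, utilization=0.2,
                       history_years=7, recent_inquiries=1))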

Obviously there are other decisions that can't simply be boiled down to a single number, but in general even those decisions that include some subjectivity try to provide a clear, objective rationale for why a decision was taken.


If things are so objective, what is the role of AI in this process?


There are really only two use cases for LLMs that have gained any traction in the enterprise: productivity and triage.

Productivity is largely being done to them, with devs using LLMs every day of their own accord, and most orgs leaving Microsoft to do the heavy lifting of making Copilot work over all their unstructured docs and emails.

Triage is the immediate prize. So many of these mega-corporations are doing mega-scale things (millions of customers, billions of transactions) that there is huge opportunity to put an AI layer in front of staff to guide and prioritize their work. Not to do their work, but to increase the chances that they are focusing on the most valuable work. The ideal AI here works like a secretary: “Good morning, I’ve reviewed all the recent calls/cases/leads/transactions and these are the top 20 that seem worth looking into.”

I don’t think anybody trusts AI to do the actual looking-into.
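
A minimal sketch of that pattern; score_case below is a hypothetical stand-in for whatever model call you'd actually trust (an LLM, a classifier, ...):

    # Minimal sketch of the "secretary" pattern: the model only ranks,
    # a human does the actual looking-into.
    def score_case(case):
        # Hypothetical heuristic standing in for a model-assigned priority.
        return case["amount"] * (2.0 if case["flagged"] else 1.0)

    def morning_briefing(cases, top_n=20):
        return sorted(cases, key=score_case, reverse=True)[:top_n]

    cases = [
        {"id": 1, "amount": 120.0, "flagged": False},
        {"id": 2, "amount": 8000.0, "flagged": True},
        {"id": 3, "amount": 9000.0, "flagged": False},
    ]
    for case in morning_briefing(cases, top_n=2):
        print(case["id"], score_case(case))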


> huge opportunity to put an AI layer in front of staff to guide and prioritize their work

Glean is an example of this and they charge $$$/seat which means it is being selectively targeted at specific teams rather than enterprise-wide.

The problem is that companies don't have their data in a state that can accommodate such a layer. And with the death of EDWs and the trend toward siloed SaaS apps, the problem just gets worse every day.

Which is why AI is going to be more like a mini-assistant inside every app you use instead of some god-like agent.


Death of EDW (assuming you mean Enterprise Data Warehouse)?

I don't hear many people use the term EDW, but data warehouses (e.g. Snowflake) / data lakes (e.g. Databricks) are very much alive. It just takes more connectors and data engineering to get all your data there.

That's a big part of the reason we started Definite [0]. It's hard to pick all the right components to get a data warehouse; we make it simple (a built-in warehouse with all the connectors you need).

0 - https://www.definite.app/


Disagree on the data silo issue. There's a growing trend of SaaS providers making data available to their customers by feeding it back into their DWs. It started with the likes of Segment and Heap, and has now grown to include companies like Stripe, Salesforce, and Zuora, to name a few. I'd wager that making data accessible is only going to become more of a table-stakes feature over time.


I have implemented AI/ML solutions in financial services in 15 countries and know for a fact that this is not true. If they are only referring to generative AI, I'd argue that the space is moving extremely fast right now. That makes it hard to implement anything, as one might not know whether next week another company will have a better model or AI safety alignment will completely bork an existing one. On top of this comes the regulatory burden, which is in place for a good reason.


If "implementing AI" means implementing yet another chatbot, I'm happy they are not using it. One of the banks I use, just added a chatbot as default option when you contact support: I'm planning to move to another bank. When I have a real problem I want to speak with a human, not with a bot. I do use AI stuff for other tasks, but I don't want it to replace real customer support.


Which bank added the chatbot?


I think taking a slow path with AI around financial services is probably wise.

Unfortunately I don’t think other countries and multinationals will take the same approach.

So how do we avoid another arms race? This seems like a good public position, but ignoring AI isn’t the right private position if you care about your financial system.


Nobody is ignoring AI.

Unprecedented budgets have been allocated for AI in the enterprise.

The results so far have been poor: apart from customer service, there haven't been any game-changing use cases to justify the hype. There just isn't that much you can do with an unauditable black box that is consistently wrong 5-10% of the time and depends on high-quality input data.

And most enterprises have been doing ML/DS for years, so they have already tackled the low-hanging fruit using NLP etc.


That. Plus, with the AI Act in the EU it will soon be illegal to do things like matching people to jobs fully automatically unless the toolkit offers top-to-bottom explainability and bias mitigation. In my industry there are tons of sexy demos, and nearly no productionized systems using LLMs for anything other than content generation and summarization, and only where mistakes are tolerated. Certainly no game-changers, though lots of enterprise-scale snake-oil salesmen.


Even for customer service I’m not aware of good use cases. But probably I’m ignorant.

Can anyone share examples? I mean well-working, proven ones, not marketing BS.


I would guess that by fine-tuning an LLM on product manuals, installation guides, FAQs, and vetted customer support cases, one could create a competent support chatbot. Using RAG you could provide it with the output of your real-time status page for the product, and use a prompt that lets it forward issues to a human if the customer seems unhappy.

I'm not sure if there's a concrete example of this in reality, but why wouldn't it work, greed and incompetence aside?
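
For what it's worth, here's a rough sketch of that retrieval-plus-escalation loop, with TF-IDF standing in for an embedding model and the LLM call itself omitted; everything in it is illustrative, not a shipping bot:

    # Rough sketch only: docs and keywords are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = [
        "To reset your password, open Settings > Account > Reset password.",
        "Installation requires firmware 2.1 or later; see the install guide.",
        "If the device LED blinks red, power-cycle it and re-run setup.",
    ]
    vec = TfidfVectorizer().fit(docs)
    doc_matrix = vec.transform(docs)

    UNHAPPY = ("angry", "refund", "cancel", "useless", "terrible")

    def answer(question):
        # Crude unhappiness check; a real bot would classify sentiment.
        if any(w in question.lower() for w in UNHAPPY):
            return "ESCALATE: routing to a human agent"
        # Retrieve the most similar doc as context.
        sims = cosine_similarity(vec.transform([question]), doc_matrix)[0]
        context = docs[sims.argmax()]
        # A real bot would now send prompt + context to the LLM.
        return "Based on our docs: " + context

    print(answer("How do I reset my password?"))
    print(answer("This is useless, I want a refund"))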


At work we use a chatbot trained on our docs, it’s pretty good and sees lots of usage. Of course people can also just read the docs, but many prefer the bot.


Same here, but adoption is a challenge. People tend to stop using it after the first bullshit answer; they lose trust completely. Which is, seemingly, inevitable.


Why is it inevitable? I think the proper way to model it is as a multi-armed bandit: a bad response from an agent should reduce your likelihood of using it again, but not to zero. If users have sufficiently good alternatives, I'd expect usage of the AI agent to drop, but it seems to me that the other options are generally worse (and often similarly likely to give bullshit answers), such that over a long timeframe users will settle on a relatively high likelihood of using the AI.
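
A toy Thompson-sampling version of that framing (the success rates are made up; two "arms": ask the bot or read the docs yourself). Each bad answer updates the posterior downward but never to zero, and if the bot really is the better arm, usage settles high:

    import random

    arms = {"bot": [1, 1], "docs": [1, 1]}  # Beta(successes+1, failures+1)
    true_p = {"bot": 0.8, "docs": 0.6}      # hypothetical answer quality

    random.seed(0)
    for _ in range(500):
        # Sample a plausible quality per arm from its posterior; pick the best draw.
        choice = max(arms, key=lambda a: random.betavariate(*arms[a]))
        if random.random() < true_p[choice]:
            arms[choice][0] += 1  # good answer
        else:
            arms[choice][1] += 1  # bullshit answer

    for a, (s, f) in arms.items():
        print(a, "used", s + f - 2, "times; posterior mean quality", round(s / (s + f), 2))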


Github Support is an example: https://support.github.com

With information about how it was implemented: https://docs.github.com/en/support/learning-about-github-sup...


Anecdotally, I'm using Kapi trained on our docs to help answer customer questions, and it's amazing at getting answers to level-1 support. We still need to edit the answers, but editing something that's 90% right is much faster than searching the docs ourselves.


Quantum Street, running on the IBM Watson platform, manages $5bn a year in portfolios, I believe.


After 2008, regulators have taken a dim view of "magic" in financial services. It was a painful lesson and they don't want a repeat.



Regulatory fears, not fears of getting it wrong. That's just trying to pass the buck to agencies or lawmakers. As soon as someone in this chain gives in, e.g. an official shortly before retirement or in need of a consulting gig in the "AI" industry, the "best practices" will be applied everywhere just out of FOMO. The results will be ... interesting.

That being said, I still think LLMs will make for novel user interfaces.


It doesn't really matter whether the firms formally "shun" AI, their employees are all going to be using AI in their own customer communications, documents, decision-making, programming, etc. The productivity gradient is just too strong to keep leak-free.


Exactly. There's also the fact that this is a money-incentivized business: people being more successful with the help of AI translates into more money for them. That won't take long to turn the whole industry on its head. Big bureaucratic companies not wanting that only creates opportunities for the smaller ones.

The counter-argument, of course, is that most of what these big companies do is already quite inefficient and stupid. I've had a few interactions with German banks that really rubbed in just how spectacularly dumb and inefficient that industry can be.

On one occasion I had to wait three weeks for some paperwork to be copied out of an archive. It turned out they were still using microfiche for storage; that stuff was obsolete by the eighties. This was in 2014, and this bank was shuffling paper around like it was the 1970s! We eventually got the copies, but I made a mental note to stay well clear of this bank, and of traditional German banks in general, for all my future banking needs. They don't do smart, efficient, convenient, or fast.


And slowly said employees will be replaced by AIs.


Any day now!

Along with Linux becoming the consumer favourite, blockchain becoming actually useful, and quantum computing: it will all surely happen.


I can’t read the paywalled article but this just isn’t true (I work in this area).

Financial services is taking a thoughtful approach to where and how to apply AI, yes. “Shunning” it? Not at all.



