Would you say that there’s a significant enough effort currently to gain an understanding? I work in fintech, and based on the kind and variety of security and compliance controls we have in place, I can’t imagine a world where we’d be permitted to include generative AI anywhere near the transaction hot path without at least one human in the loop. Unless the value is so enormous that it dwarfs the risk. I can see at least one use case here, especially at the integration layer, which has a very large and boring spend as partners change/modernize and new partners enter the market.
We already understand enough, and have for many years, to know that the Achilles’ heel of any system we currently call "AI" is that these are fundamentally statistical methods that cannot be formally verified to act correctly in all cases. Modern-day chatbots will have the worst time of it, since there is very little constraining their behavior and they are explicitly built to be general-purpose. You can make the case for special, constrained tools whose variability is bounded within defined and appropriate limits (a toy sketch of what I mean is below), but you can’t make the case that the no-free-lunch theorem has been defeated just because a statistical learning system happens to write text roughly the way an English-speaking human might.
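To be concrete about "constrained": here’s a minimal sketch, with all names hypothetical and `call_model` standing in for whatever inference call you’d actually make. The model can only ever yield one of a fixed set of actions, and anything outside that set fails closed rather than being interpreted charitably.

```python
from enum import Enum
from typing import Callable

class Action(Enum):
    MAP_FIELD = "map_field"            # e.g., suggest a schema mapping
    FLAG_FOR_REVIEW = "flag_for_review"

def constrained_classify(text: str, call_model: Callable[[str], str]) -> Action:
    # The model's raw text is only accepted if it exactly names an
    # allowed action; everything else falls through to review.
    raw = call_model(text).strip().lower()
    try:
        return Action(raw)
    except ValueError:
        return Action.FLAG_FOR_REVIEW  # fail closed: never guess
```

The point isn’t the dozen lines of Python; it’s that the statistical part proposes and the deterministic part disposes.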
It's my personal opinion that there should never be a decision system based on statistical approximations without a human in the loop, particularly if the consequences can affect lives and livelihoods.