ChatGPT for financial advice? Not so fast

The stereotype of chat-based large-language-model artificial intelligence products in early 2023 is that they are very good at sounding smart but not so good at being correct. If you ask a model like ChatGPT to write up some investment recommendations, it will produce plausible fluent prose that reads like a professional investment recommendation and that includes compelling citations to data points, but the data might be all made up and the recommendations might be worthless. It is good enough to convince the uninformed, but not good enough to act on.

Given that stereotype, you might imagine two ways for big banks to think about these models:

  • “Being fluent, confident and wrong is a core job function of a banker or investment analyst, and ChatGPT allows our employees to be much more efficient at it, so we should encourage them to use ChatGPT as much as possible.”
  • “Our clients will be annoyed with us if we are constantly fluent, confident and wrong, so we should ban use of ChatGPT until we are more confident that its recommendations are correct.”

Obviously the first approach would be very funny, but the second is more realistic. In fact, though, there is a third, less obvious and even more correct way for big banks to think about the use of these models:

  • “If our employees type about business on a computer or a phone, it had better be in a software system that we control and that creates a searchable record that we can preserve forever, because otherwise our regulators will get mad at us. ChatGPT is an artificial intelligence model, but it is also a box for typing on a computer, and that’s too big a regulatory risk for us.”

Bloomberg’s Gabriela Mello, William Shaw and Hannah Levitt report:

Wall Street is clamping down on ChatGPT as a slew of global investment banks impose restrictions on the fast-growing technology that generates text in response to a short prompt. 

Bank of America Corp., Citigroup Inc., Deutsche Bank AG, Goldman Sachs Group Inc. and Wells Fargo & Co. are among lenders that have recently banned usage of the new tool, with Bank of America telling employees that ChatGPT and OpenAI are prohibited from business use, according to people with knowledge of the matter.

In a regular, routine reminder about unauthorized apps, including WhatsApp, BofA added a reference to ChatGPT specifically, and has repeated in internal meetings that new technology must be vetted before it can be used in business communications, the people said.

We have talked a few times about the US Securities and Exchange Commission’s crackdown on banks that use anything other than “official channels” to do business: If you text a client from your personal phone, or send her a WhatsApp message, that will get you and your bank in trouble. Not that you texted her about doing crimes, I mean; even perfectly innocent businesslike communications over unofficial channels will get you in trouble. You’re still allowed to talk about business in person, over lunch, but give it time.

In like five years, technology — and the SEC’s interpretation of the rules — will have advanced to the point that banks will get fined if their bankers talk about business with clients on the golf course. “You should have been wearing your bank-issued virtual reality headset and recorded the conversation,” the SEC will say, or I guess “you should have played golf in your bank’s official metaverse, which records all golf conversations for compliance review, rather than on a physical golf course.” The golf course is an unofficial channel! No business allowed!

Well, similarly. If you want to get advice from a robot about how to invest — or if you want the robot to help you write a presentation for clients — then you had better communicate with the robot using official channels! Typing in the ChatGPT box isn’t an official channel, so it’s not allowed.
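(For what it’s worth, the fix here is conceptually simple, even if the compliance reality is not: route every prompt and reply through a bank-controlled wrapper that archives the exchange before anyone acts on the answer. Here is a minimal sketch in Python of what such an “official channel” for a chatbot might look like; the `send_to_model` callable, the archive file and the whole design are hypothetical illustrations, not any bank’s actual system.)

```python
import json
import time
from typing import Callable

ARCHIVE_PATH = "chat_archive.jsonl"  # hypothetical append-only compliance record


def compliant_chat(prompt: str, user_id: str,
                   send_to_model: Callable[[str], str]) -> str:
    """Relay a prompt to a model, archiving both sides of the exchange.

    `send_to_model` stands in for whatever API call actually reaches the
    model; the point is that nothing goes in or out unrecorded.
    """
    reply = send_to_model(prompt)
    record = {
        "timestamp": time.time(),
        "user": user_id,
        "prompt": prompt,
        "reply": reply,
    }
    # Append-only, searchable-forever record for the compliance department.
    with open(ARCHIVE_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")
    return reply


# Example with a stand-in "model":
# print(compliant_chat("Summarize Q4 earnings.", "banker-123", lambda p: "..."))
```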
