The stereotype of ChatGPT, OpenAI’s publicly available conversational large language model, is that if you ask it a technical question it will give you an answer that is confident and plausible but not necessarily correct. This is of course an incredibly useful skill, and a very good model of a certain sort of human intelligence; ChatGPT seems dangerous as a software engineer but could have a good run as an investment banker.
How is it as an asset manager, or at least an asset-management sales robot?:
In a bid to see how close technology really is to replacing Wall Street’s army of analysts, experts and money runners, we challenged ChatGPT, the artificial intelligence tool that’s taking the internet by storm, to create us a winning portfolio for the US stock market.
The result: A classic exercise in fence-sitting, with the tool explaining that the market is too unpredictable to design such a fund, while warning about the need to pick investments aligning with our goals and appetite for risk-taking.
“It is not possible for me to design an ETF that will beat the US stock market because the stock market is unpredictable and past performance does not guarantee future results,” ChatGPT says, disappointingly, adding a bunch of not-investing-advice boilerplate. Bleh. I would have expected more unearned confidence. Like I’d expect it to come up with a list of 50 tickers, no problem; I just wouldn’t expect them to beat the market, and maybe a couple of them would not correspond to actual stocks.
You can do better, where by “better” I mean “get more confident answers out of ChatGPT” rather than “actually get it to beat the market for you.” Last month Robin Wigglesworth at the Financial Times wrote about a quantitative research analyst who got ChatGPT to write research notes for him; they were, you know, adequate. And I once wrote about GPT-2, an earlier large language model:
If you trained it on a bunch of Warren Buffett annual letters maybe it would say some stuff that sounds like Warren Buffett? Not just in terms of folksy sex jokes but also in terms of penetrating investment insight? Maybe GPT-2 would digest Buffett’s mind, or rather specifically the parts of Buffett’s mind that are exposed when he writes prose, and it would use that understanding of his mind to write Buffett-like prose recommending Buffett-like investing decisions?
Or if you run an investment firm and you’ve got a corpus of memos from your analysts recommending investment decisions, why not take the memos that worked out—the ones recommending investments that went up—and feed them into GPT-2? Then have it write you a new memo and see if it’s any good?
I wouldn’t bet on it or anything; it’s interesting to speculate about, but it seems unlikely that a computer will get good at making investment decisions just by ingesting how humans have articulated investment decisions. (It’s much more likely to fixate on boilerplate about how “past performance does not guarantee future results.”) But I bet a computer could get pretty good at articulating investment decisions from that training set.

In a sense this is the opposite of a classic “black box” machine-learning investment algorithm: Classic quant signals might work, but you can’t explain them; ChatGPT might not work, but it can explain itself. If your hedge fund lost money last year, maybe ask ChatGPT to write your investor letter for you. It can spin up some platitudes, quote some Stoic philosophers, and sound confident and plausible, which is all you really need for the letter.