Why game theory falls short in AI-driven trading markets

The rise of artificial intelligence-driven trading has raised questions about whether algorithmic investors might crowd into many of the same ideas and amplify stress during periods of volatility.

Bo An, a leading AI academic and President’s Chair Professor in computer science at Nanyang Technological University, said that while the scenario is unlikely, financial market regulators are becoming more interested in how it could play out.  

“That’s something the government is super interested in. They want to know if everyone is using AI to trade, what’s going to happen? And whether the market is fluctuating more or if it’s easier to collapse,” An told the Fiduciary Investors Symposium in Singapore.

“I feel like we are not there yet, because nowadays they are high-frequency trading [firms using AI], but still, lots of times there are other firms not doing high-frequency trading.”

An was responding to a question from Professor Stephen Kotkin, senior fellow at the Hoover Institution at Stanford University, on whether game theory – the study of how rational individuals make decisions when the outcome depends on the decisions others make – could be applied to better understand AI-driven trading.

An is of the view that game theory doesn’t work in trading because the theory requires the players, their strategies and the payoffs of those strategies to all be clearly defined. In financial markets that is nearly impossible: the number of potential strategies is vast, and their payoffs shift constantly. Investment companies also remain highly protective of their intellectual property, so there is little visibility into individual models’ underlying methodologies and “a lot of randomness” in the different models’ output, he said.
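
For readers unfamiliar with the formalism, the standard textbook definition of a finite normal-form game is sketched below; the notation is conventional background, not taken from An’s remarks.

```latex
% Standard textbook formalism: the three objects that must be
% "clearly defined" for game theory to apply.
\[
G = \bigl( N,\ (S_i)_{i \in N},\ (u_i)_{i \in N} \bigr)
\]
% where
%   N   is the set of players,
%   S_i is the set of strategies available to player i, and
%   u_i : S_1 \times \dots \times S_n \to \mathbb{R} assigns player i
%         a payoff for every profile of strategies jointly chosen.
```

An’s argument is that in live markets none of these three objects can be pinned down: the set of players, the strategy sets and especially the payoff functions are opaque and constantly shifting.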

“I don’t believe game theory can work in trading, that’s my personal opinion. I think the only thing we can do is you need to keep your strategy adaptive to data,” he said.

Another characteristic of AI that investors need to keep in mind is that it is less mature at handling causation than correlation. Large language models learn to recognise patterns in vast amounts of text rather than logical rules, which is why even the most advanced AI models can have trouble solving simple math problems.

“If you ask AI to calculate, let’s say 25 plus 27, how they do it is they will do the reasoning. The first step, they will tell you that the answer is between 45 and 55 then they will continue to do the reasoning, then they will say the last digit is two,” An explained.

“Even if the answer is correct, it’s totally different from the principled way to do the calculation.

“Human-level intelligence, you know, is adaptive. We have our minds, and we might start to do something out of our expectations, but AI is not there yet.

“They are still maybe trying to learn the patterns from the data, rather than having self-consciousness or those type things. We’re not there yet.”
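
To make the arithmetic example concrete, here is a minimal Python sketch contrasting one-step carry-based addition with the staged “estimate a range, then fix the last digit” reasoning An describes. It is a hypothetical illustration, not how any model actually computes; the function names and the exact decomposition are assumptions made for this sketch.

```python
def principled_add(a: int, b: int) -> int:
    """Ordinary arithmetic: one exact, carry-based calculation."""
    return a + b


def staged_add(a: int, b: int) -> int:
    """Toy version of the staged reasoning An describes (illustrative only)."""
    low = (a // 10 + b // 10) * 10    # step 1: coarse bound from the tens (25 + 27 -> at least 40)
    units = a % 10 + b % 10           # step 2: look at the units alone (5 + 7 = 12)
    last_digit = units % 10           # "the last digit is two"
    carry = 10 if units >= 10 else 0  # step 3: resolve the carry into the tens
    return low + carry + last_digit   # 40 + 10 + 2 = 52


assert staged_add(25, 27) == principled_add(25, 27) == 52
```

Both routes arrive at 52, which is An’s point: the answer can be correct even though the staged route reasons its way toward it rather than calculating it in one principled step.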
