AI will have a huge impact on financial services, particularly investment and risk management, over the next five years, a panel of experts said at the Fiduciary Investors Symposium at Stanford University. Machine-learning systems that analyse data to mimic human cognitive behaviour around problem solving are spreading as more data and cheap computing power become available.
Machine learning has the potential to generate alpha and reshape business processes. Data is collected, processed and presented automatically by machines, rather than by people within organisations, said Kay Giesecke, professor of management science and engineering at Stanford University, who noted that far more research projects in his department have focused on AI over the last five years.
The availability of data and computing power has opened AI strategies to investors, said Jens Kroeske, head of macro systematic strategies research at Aberdeen Standard Investments. In the past, data such as earnings transcripts was gathered by specialised companies. Now standard financial data providers present deep detail in new, easily accessible formats that professionals can plug into natural language processing algorithms.
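At its simplest, plugging transcript text into such an algorithm can mean a bag-of-words sentiment score. The sketch below illustrates the idea; the word lists and the snippet are illustrative inventions, not a real financial lexicon.

```python
# Minimal sketch: a bag-of-words sentiment score over an earnings-call
# snippet. The word lists below are illustrative, not a real lexicon.
POSITIVE = {"growth", "beat", "strong", "record", "improved"}
NEGATIVE = {"decline", "miss", "weak", "impairment", "headwinds"}

def sentiment_score(transcript: str) -> float:
    """Return (positive - negative) / matched words, or 0.0 if none match."""
    words = [w.strip(".,").lower() for w in transcript.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    matched = pos + neg
    return (pos - neg) / matched if matched else 0.0

snippet = ("Revenue growth was strong and margins improved, "
           "despite headwinds in one segment.")
print(sentiment_score(snippet))  # → 0.5
```

Real systems use far richer language models, but the commoditised versions now on offer differ in sophistication, not in kind, from this scoring idea.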
“There has been a commoditisation of machine-learning algorithms,” Kroeske said. More people can now run these programs on specific data sets, he added, even without programming experience, widening access beyond the preserve of specialised coders.
Although the growth in systematic strategies points to a natural evolution towards indexing, Kroeske said the risk-management question still needs solving. In an industry built on trust and human relationships, machine learning still lacks a risk-management framework that explains it and bounds its behaviour. Once this is addressed, it will be disruptive and will displace hedge funds and fundamental managers, he said.
The need to explain to stakeholders how machine learning reaches decisions is a key component. It’s something Kroeske believes is getting easier.
“Modern machine learning is designed in such a way that it should be easier to explain,” he said. Its focus on finding relevant similarities, variables and past examples helps “explainability”. He predicts that the line between fundamental and quant investing will become increasingly blurred as the types of information fed to machine learning and to a discretionary manager grow similar.
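Explanation through relevant past examples can be made concrete with a nearest-neighbour sketch: the model justifies each forecast by pointing to the most similar historical observation. The features and return figures below are made-up stand-ins, not real data.

```python
# Sketch of "explanation by past examples": a 1-nearest-neighbour
# predictor that justifies each forecast by citing the closest
# historical observation. All numbers are illustrative.
from math import dist

# (features: [valuation z-score, momentum z-score], next-period return)
history = [([ 1.2,  0.5],  0.04),
           ([-0.8,  1.1],  0.02),
           ([-1.5, -0.9], -0.03)]

def predict_with_example(features):
    """Return (predicted return, the past example that drove it)."""
    nearest = min(history, key=lambda ex: dist(ex[0], features))
    return nearest[1], nearest[0]

pred, because = predict_with_example([1.0, 0.3])
print(pred, because)  # → 0.04 [1.2, 0.5]
```

The appeal for stakeholders is that the "because" is a concrete precedent rather than an opaque weight matrix.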
Data is king
As data has led to better insights, decisions and outperformance, its accumulation has become key. The next five years will see trillion-dollar companies that have grown not by producing widgets but through the mass collection of information, predicted Jagdeep Singh Bachher, chief investment officer of the University of California Regents. Taking the energy sector as an example, he said data accumulation would grow with the shift from centralised to decentralised sectors and economies.
The explosion in the flow of assets to exchange-traded funds and quant strategies has been driven by the “idea that data creates better alpha”, Bachher said. It is also driving the University of California Regents’ own direct investments in companies. It now invests in Ola, India’s rival to Uber, where data patterns offering insight into behaviour and dynamic pricing define how the company charges customers and drive the value of the business. In another example, Bachher cited the tech companies vying for access to healthcare records and data. He told delegates that investment in data and data sources was an area they should pursue.
“It could lead to better insights to decision-making,” he said.
Balancing machine learning and fiduciary responsibility is challenging, the panel said. Investors have a fiduciary responsibility to explain where their investment performance is coming from. Machine learning involves replacing the traditional linear models used for predicting returns with a non-linear model, said Giesecke, who referred to this as the “key driver” behind outperformance in machine learning. The world is not linear, and every variable depends on other variables, yet conventional approaches ignore this, he said. He noted the trade-off between drawing on non-linear data and “explainability”. “You can have explainability or give up some performance – but ideally you want both.”
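Giesecke's point about linear models can be seen in a toy example: fit a straight line to a relationship that is genuinely non-linear and the errors are large, while a model allowed one non-linear term fits the same data exactly. The synthetic data below stands in for returns; it is not drawn from any real market.

```python
# Toy illustration: a linear model fitted to a non-linear relationship
# (here y = x**2) misses badly, while a model with one non-linear
# feature fits exactly. Synthetic numbers, no real return data.
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [x * x for x in xs]          # the "true" non-linear relationship

# Ordinary least squares for y = a*x + b (closed form).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
sse_linear = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

# The same data with a single non-linear feature, y = c*x**2.
c = sum((x * x) * y for x, y in zip(xs, ys)) / sum((x * x) ** 2 for x in xs)
sse_nonlinear = sum((y - c * x * x) ** 2 for x, y in zip(xs, ys))

print(sse_linear, sse_nonlinear)  # → 14.0 0.0
```

The catch, as the panel notes, is that the linear coefficients are self-explanatory while richer non-linear models generally are not, which is exactly the performance-versus-explainability trade-off.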
Fiduciaries could introduce governance around which machine-learning models they use. Conversations could cover why certain models require so much data, or rank the importance of the data feeding the algorithms, so that fiduciaries understand which parameters in the model made the difference.
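One common way to produce such a ranking, sketched here under toy assumptions, is permutation importance: scramble one input at a time and measure how much the model's error grows. The model and data below are hypothetical stand-ins, not any panelist's production system.

```python
# Sketch of permutation importance: shuffle one feature at a time and
# record the increase in mean squared error. Toy model and data.
import random

def model(row):
    # Hypothetical fitted model: leans heavily on feature 0,
    # lightly on feature 1, and ignores feature 2.
    return 0.9 * row[0] + 0.1 * row[1] + 0.0 * row[2]

random.seed(0)
data = [[random.gauss(0, 1) for _ in range(3)] for _ in range(200)]
targets = [model(row) for row in data]   # model is exact on clean data

def mse(rows):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

importance = {}
for j in range(3):
    shuffled = [r[:] for r in data]
    column = [r[j] for r in shuffled]
    random.shuffle(column)
    for r, v in zip(shuffled, column):
        r[j] = v
    importance[j] = mse(shuffled) - mse(data)   # error increase

ranking = sorted(importance, key=importance.get, reverse=True)
print(ranking)   # feature 0 should rank first, feature 2 last
```

A report of this kind, showing which inputs actually moved the model, is the sort of artefact a governance process could require before a strategy goes live.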