Featured Story

AI is the challenge (and opportunity) of a lifetime for asset owners

Artificial intelligence is reaching into almost every facet of the way we live, work and play as it reshapes society, industries and businesses. The opportunities for investors are immense, but so are the potential pitfalls. For the current generation of institutional asset owners – fiduciaries for hundreds of millions of individuals whose retirement savings they invest – AI may be the greatest challenge and opportunity they will face. Top1000funds.com takes a deep dive into the world of machine learning and how and why the world’s leading asset owners are embracing AI in their assessment of investments and their own internal efficiencies.

One of the greatest challenges in assessing the potential impact of artificial intelligence (AI) on large institutional asset owners is not identifying the things it might be useful for, nor the investment opportunities it may reveal, but first of all defining what AI even is.

The potential for AI to help identify new investment opportunities, to revolutionise whole industries and to streamline processes and enhance productivity has been extraordinarily well hyped. But it is better at some things than others, and working out whether it is even worth trialling in a particular task or function relies on a clear understanding of what it is.

IBM says AI is “a field, which combines computer science and robust datasets, to enable problem-solving”. McKinsey says it is “a machine’s ability to perform the cognitive functions we usually associate with human minds”. Elsewhere, AI has been defined as “a poor choice of words from 1954”.

Jacky Chen, director of total fund completion portfolio strategies at the $C25 billion ($18.4 billion) OPTrust, says AI is “systems, tools and machines that are programmed to think and act or learn like humans”.

“Essentially, [it] is trying to replicate human abilities,” Chen says. “But it’s all mostly done by a machine.”

A central element of Chen’s definition of AI is the technology’s ability to learn, or at least appear to learn, by refining and revising its own rules or algorithms as it goes along. He says this makes it “quite different” from other forms of technology, even very highly powered computing solutions, that run to a fixed algorithm defined and written by humans.

“That really allows the system to learn from data and improve continuously by observing patterns,” he says.

“That is quite different from some of the traditional techniques [where] you just basically have some rule-based, human-decided rules programmed into the machine and try to make the machine do some things that are human-like.”

The $C244 billion ($180 billion) PSP Investments’ managing director of digital innovation and private markets solutions Ari Shaanan says computing is moving away from deterministic outcomes to probabilistic outcomes. In a deterministic model, a given input is subject to a set of rigid and defined rules and produces a predictable outcome. In a probabilistic model, a given input can result in a range of outcomes – and those outcomes may change over time as the rules change.

“It is actually much closer to the way the human brain works, in the sense that if you give someone the same inputs, if you give them the same core set of environments and contexts, people will react differently, depending on whatever is the day, the time, the hour, the additional thousands of potential inputs in there,” Shaanan says.

“We’re somewhat non-deterministic. You can’t always – sometimes you can but not always – just predict how people work, and software it’s the same thing. AI is that shift away from deterministic code, where if you put in one, press enter, you know exactly what the output is going to be. Machine learning is much more you put in one, and you have a probability of where it’s going to go, but [it’s] not determined, [it’s] not for sure.”
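The distinction Shaanan draws can be made concrete in a few lines of illustrative code. This is a minimal sketch of the general idea, not drawn from PSP’s systems:

```python
import random

def deterministic(x):
    # Traditional rule-based code: the same input always
    # produces exactly the same output.
    return 2 * x

def probabilistic(x, noise=1.0, rng=random):
    # A machine-learning-style model: the same input maps to a
    # distribution of outputs; each call samples one of them.
    return 2 * x + rng.gauss(0, noise)

print(deterministic(1))   # always 2
print(probabilistic(1))   # varies from call to call
```

Press enter on the deterministic function and the output is known in advance; press enter on the probabilistic one and only the distribution of outputs is known.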

Human and machine interaction

Shaanan says a critical and relatively recent development in AI is how humans interface with it, which is why language-based models, such as ChatGPT, have captured the public imagination.

“You can now have much more natural interactions with models,” Shaanan says. “You can still run deterministic things on the back end – for example, true math problems, or statistical models, regressions, whatever you’re trying to do on the back end. But instead of having to see, for example, data tables, it can be communicated back in language. It just facilitates interactions between people and computers. It’s the next evolution.”

Global head of digitisation and innovation for the EUR508 billion ($549 billion) APG Asset Management, Peter Strikwerda, says APG’s definition of AI is “The ability of computer systems to execute cognitive tasks that require human intelligence, without human intervention”.

At the Fiduciary Investors Symposium at Stanford University in September last year, Fei-Fei Li, inaugural Sequoia Professor in the Computer Science Department at Stanford University and co-director of Stanford’s Human-Centered AI Institute, debunked the notion that AI is a fad.

“One thing I take a lot of pride [in] as a Stanford professor is we’re not in the business of hype; we’re scholars and technologists,” she said.

“My answer to all my friends in investment is that, as far as I can see, this is a genuine inflection point of technology. I’ve been in this field long enough, I’ve seen hype cycles, I’ve seen a lot of misinformation. But I do genuinely believe that AI’s moment has arrived, in the sense that this technology is ready to really transform businesses, to deliver products and services that would really have mass value.”

AI has existed in some form or another for half a century or longer. But the implications of this wave of AI are profound for asset owners. As the ready availability of vast oceans of data combines with exponential increases in computing power, they must obviously understand and assess the impact AI will have on the companies in whose shares they invest and whose debt they buy. But they must also recognise and exploit the opportunities AI presents for themselves as fiduciaries charged with stewarding the retirement savings of hundreds of millions of individuals around the world.

Asset owners embracing AI

“The largest part is and will be investment-related,” APG’s Strikwerda says.

“So that’s our investment department, portfolio management, trading. Responsible investing is a very interesting and important area, I think, for AI. But we also see activity in reporting, in risk management, and even some in operations – for example, clearing and settlement-type [work], preventing breaches. But I think in the bigger picture, let’s say, the whole core investment process is the starting point for us.”

Shaanan says the biggest impact of AI for PSP will be on the assets held within its portfolios, but says developing in-house AI expertise helps to support analysts and portfolio managers in their understanding of where AI is headed, and its potential ramifications.

“That part is important to PSP and it should be important to everyone,” Shaanan says. “We spent a lot of time actually sharing the knowledge that my team has gained on these AI projects with our investor teams, to think through the impact it could have on the portfolio.

“We’ll share knowledge from our projects, but we’ll also interface a lot with our partners in what they’re doing on AI in their portfolio. And then we’re trying to bring that back again to our investors and actually more just stimulate sort of a PSP-wide level discussion around AI and upskill everybody in terms of knowledge on the topic, how to use it, where it’s valuable, where it can make a difference, where it’s going to impact society.

“We’re really trying to raise PSP’s game in this from a knowledge perspective, more than anything.”

Chen says that AI is currently being evaluated within OPTrust and used initially to manage risk. He says it is particularly well-suited to analysing vast volumes of data and very quickly identifying patterns or relationships within the data, which are then brought to the attention of the fund’s investment teams.

Chen says OPTrust has formed internal working groups to help the organisation understand better how it can harness AI to make it more productive, and to train staff on using AI-enabled tools so that when new applications become apparent it has the internal capabilities to capitalise on them.

“And that is on top of what we have been doing while we continue to use machine learning to enhance our investment process,” he says.

“What we’re essentially doing is using machine learning to understand market patterns, to see whether there’s increased uncertainty in the market that potentially informs higher risk of certain assets. And as a result of that, we can act [on] that information that was summarized by the machine.”
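One simple, non-proprietary flavour of that idea – flagging periods whose volatility is unusually high relative to history – can be sketched with a rolling z-score. This is purely illustrative and far cruder than anything OPTrust would deploy; the function name and thresholds are assumptions:

```python
import random
import statistics

def uncertainty_flags(returns, window=20, threshold=2.0):
    """Flag periods whose rolling volatility is unusually high
    relative to the full sample -- a crude proxy for 'increased
    uncertainty in the market'."""
    vols = [statistics.stdev(returns[i - window:i])
            for i in range(window, len(returns) + 1)]
    mean, sd = statistics.mean(vols), statistics.stdev(vols)
    # z-score each rolling volatility and flag the outliers
    return [(v - mean) / sd > threshold for v in vols]

# Illustrative data: a calm regime followed by a stressed one
rng = random.Random(1)
returns = ([rng.gauss(0, 0.01) for _ in range(100)]
           + [rng.gauss(0, 0.05) for _ in range(20)])
flags = uncertainty_flags(returns)
# the stressed tail gets flagged; the calm start does not
```

The flagged periods are the machine-summarised signal; acting on them, as Chen describes, remains a human decision.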

PSP’s Shaanan says the application of AI in any of the fund’s operations must pass a rigorous use-case test and have the clear potential to deliver enhanced returns, lower risk, or lower costs. There are no formal hurdles or thresholds for these measures, except to say the benefit must clearly exceed the cost.

It’s difficult to prove in some cases, and these tend not to get off the ground; in others it’s relatively easy to forecast: one or two better investments made in private markets, for example, can easily compound to tens or even a hundred million dollars of additional return for a fund the size of PSP.

“Those things scale very quickly on the $500 million tickets that we write,” Shaanan says. “That’s why it’s hard to prove it with an exact science of it’s this [exact] threshold; but you do need to be able to say, yes, it’s got potential for hundreds of millions [of dollars] of impact, if you make two better investment decisions.”

Shaanan says there is no shortage of ideas put forward for AI enhancements; the trick is separating out those that will generate the greatest bang for the buck. PSP works in three phases: ideation, incubation, and then – when the use-case is proven – implementation.

Projects create their own momentum. Shaanan says that once his team has worked with one area of PSP’s business and demonstrated results, it often prompts other areas of the business to want the team to do the same for them.

“And that’s a major piece, I call it major idea stimulation,” he says.

“You’ve proven success somewhere, where else could we scale it, where else could we apply it? Yes, absolutely.”

Impact on costs and performance

But as more asset owner organisations take up AI and it becomes part of business as usual, it’s increasingly unlikely it can deliver a sustained competitive advantage. Some organisations may steal a march on competitors, even if only temporarily, as they discover new or innovative applications. But not utilising it effectively will almost certainly put an organisation at a competitive disadvantage.

And there’s another issue, too. As more and more investors use AI to analyse ever-increasing amounts of data in a bid to eke out additional investment returns, their actions will make investment markets more efficient – making it even more difficult to extract meaningful alpha.

Japan’s ¥158 trillion ($1.4 trillion) Government Pension Investment Fund (GPIF) uses AI to improve the selection of active asset managers, addressing the issue of spending tens of millions of dollars in management fees for no or even negative alpha.

The $700 billion Abu Dhabi Investment Authority (ADIA) is spending heavily on in-house technology following the realisation that a reduced capacity to generate alpha was linked to a lack of investment in big data and AI.

The technology is also being used by the $200 billion Teacher Retirement System of Texas (TRS), where managing director Mohan Balachandran describes its use as “a giant leap forward”; TRS uses it to identify signals found in large data sets that are then passed to a portfolio management team for further evaluation.

And it’s far from only the giant funds that are tapping into the potential of AI. The DKK217 billion ($30.6 billion) Industriens has developed algorithms to support a range of investment-related activities, including optimising asset allocation, uncovering anomalies in data, performing automated text analysis, minimising tracking errors and maximising Sharpe ratios. However, the fund’s investment risk and data manager Sommer Legaard cautions that human oversight is still critical, and that “we never use our models or programming without some human validation”.

“Everything we do we try to automate, but we also vet things manually to check if it looks right,” she says.
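Tracking error and Sharpe ratio, the two statistics Industriens’ algorithms target, are standard quantities. As a point of reference, a minimal sketch of how each is computed (illustrative only, not the fund’s code):

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    # Mean excess return per unit of return volatility.
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

def tracking_error(portfolio, benchmark):
    # Volatility of the active return versus the benchmark --
    # the quantity an index-aware optimiser tries to minimise.
    active = [p - b for p, b in zip(portfolio, benchmark)]
    return statistics.stdev(active)

portfolio = [0.02, -0.01, 0.03, 0.01]
benchmark = [0.01, -0.01, 0.02, 0.02]
te = tracking_error(portfolio, benchmark)
sr = sharpe_ratio(portfolio)
```

An optimiser then searches over portfolio weights to push the first number up and the second down; the human validation Legaard describes sits on top of that search.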

For asset owners the impact of AI is potentially at least threefold. It presents opportunities to invest in companies that themselves will benefit from the increasing adoption of AI, such as semiconductor manufacturers, cloud computing providers, and companies that produce cables and cooling systems. AI also has the potential to make asset owners’ internal processes more efficient, thereby lowering costs and improving net returns. And, of course, it facilitates the analysis of reams of data to uncover new investment opportunities and sources of alpha across all investment markets and asset classes.

Failing quickly

APG’s Strikwerda says that around eight out of every 10 trials of AI end up being killed off. But APG is adept at killing them off before the investment has become too significant. It starts by breaking down the potential use case into small, manageable chunks and funding those, so that if it becomes apparent an idea is not going to work as it is built out further, “we manage the risk of investing in something that may not be worthwhile in the end”.

“We only scale it up, meaning we put in some serious investments, if we have proven that there is enough value, that it can be done, that it is within certain confines of risks and policies, what have you, or we kill it, if it proves not to work. We’ve done this for about seven years, and we’ve run I think close to 100 initiatives in this methodology.”

Despite the best intentions and brightest initial promise, “sometimes it just doesn’t work”, Strikwerda says.

“It also needs to be adopted in the end internally. If we have something brilliant that nobody needs or wants to buy, so to speak, it’ll sit on a shelf. It’s [about] proving that there is value, proving that it can be done at reasonable investments, costs, risks, et cetera, and having someone at enough senior level owning it.”

“An interesting example of an AI experiment that we ran a while ago…was that we would be able to predict, let’s say, societal turmoil, based upon Twitter. Being pensions investors, we typically are on the radar when something happens that could then lead to bad press.

“The hypothesis was that Twitter would [provide] early signals, and the interesting part was that we were almost there. We used billions of tweets and quite advanced AI to process that and to predict. What we found out, as I flag it always, is that we were able to predict that there would be a fire in Amsterdam, but not exactly where. It’s quite good but it’s not actionable, so we [killed] it.”

But when it works, it can be spectacular. APG’s work on how individual company products contribute to the UN’s Sustainable Development Goals led to the spin-off of Entis, now a stand-alone business.

“Fast forward five years, it’s a very mature offering which we share with like-minded investors, which has been commercialised to the market – which is not our primary goal, let me be clear there,” Strikwerda says.

“It has to be a healthy financial situation, but we’re not in it for the profits.

“It’s very advanced AI that has been used, it’s been ever-evolving, it’s getting better and better [using] enormous amounts of data, structured [and] unstructured, coming from everywhere. This is a classic and a very interesting one; they produce alpha factors for us. From the same data, we also bring in hypotheses, like if we combine this and this and this, could we find some alpha factors? That has started to pay off, too. This is quite a big one.

“This has become a whole business model, a multi-million-dollar business model.”

Productivity improvements

Sometimes an AI revolution has more humble, though no less impactful, origins. The $A75 billion ($49.3 billion) Rest superannuation fund is taking part in the Microsoft 365 Copilot Early Access Program (EAP), which embeds AI into the Microsoft 365 suite, including those stalwarts of businesses worldwide, Word and Excel.

Rest is one of the first organisations in Australia and one of only 600 in the world to be invited to take part in the program, and the fund’s chief technology and data officer Jeremy Hubbard believes AI is already delivering personal productivity improvements and serving as a good introduction to AI for the fund’s workforce.

“It sets us up for the next phase, which is our Phase Two mode: can we start using Rest data to tailor that model in a way that it’s able to help our business with the context of Rest information – information about our policies, procedures, standards, our systems, et cetera?” Hubbard says.

Hubbard says Rest has built its own version of ChatGPT, dubbed RestGPT, a “little Teams bot using [Microsoft’s] OpenAI GPT-3.5 model, which enabled us to give access to all of our business via a very simple interface, being Microsoft Teams, and [the] ability to interact with a ChatGPT-like solution, but using Microsoft’s enterprise security”.

Hubbard says Rest currently has a small dedicated internal innovation team, “but with a broader sort of virtual team”.

“We’re trying to build a community around that AI team,” he says.

“Given it’s quite a small investment at the moment, we haven’t set hard targets that we need to deliver to, but for me, what we need to be delivering to is multiple what we call proof-of-value experiments. RestGPT would be one, and I would say that adoption and usage of it is good, definitely a good example.”

There are other areas where it may be easier to put figures on the value provided to the fund and its members, Hubbard says, particularly in the software engineering space.

“There are some really prime examples we’ve found that we’re just currently experimenting with,” he says.

“If we’re upgrading a development framework, and we have to do a fairly simple rewrite of all the code to work on the new function, we will be able to automate some of those pieces.

“For me, what’s exciting there is we can estimate with our estimating methodology, this is how long it would have taken a team of developers to update, and then we can do the same thing with AI. And we’ll be able to have, I think, a really black-and-white view that this saved us x hours or x weeks, and x hundreds of thousands of dollars. That’s emerging, but that’s another space where I think we can prove very tangible value.”

At this stage, and for the foreseeable future, AI will not be autonomously making investment decisions based on what it learns. One hurdle, among a number, to using AI this way is a relative lack of data to train on. That might sound ridiculous, given how much financial data is created every second of every day, but it pales into insignificance compared to the entire contents of the internet, which is effectively ChatGPT’s training ground.

Chen says that financial data is “not as rich compared to other areas, especially during some of the stress environments because you don’t get financial crises that often”.

“Those are the periods that really matter to us from an investment program,” he says.

“Something that I think is a great opportunity for generative AI is to help us to build synthetic data, simulated or synthetic data. So what that means is data that is not entirely the same as the one we have observed in the history, but still plausible scenarios that potentially can help us to analyse our investment strategies.

“There is some progress actually being made on this. I have actually read a paper on this recently, that you can actually use simulated data, use AI to generate something that is similar to real data that you can test on.

“That is still not easy to do, because you essentially have to be able to understand the different markets and simulate all this together. There’s still some way to go, but I think that will be really important for asset managers to have better data and better potential scenario analysis tools,” Chen says.
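Chen is describing sophisticated generative models, but the underlying idea – producing return paths that never occurred yet remain statistically plausible – can be illustrated with a much simpler, non-AI technique, the block bootstrap. This sketch is an assumption-laden illustration, not OPTrust’s method:

```python
import random

def block_bootstrap(returns, block_size, n_periods, rng=None):
    """Build a synthetic return path by splicing together randomly
    chosen contiguous blocks of history, preserving short-run
    dependence within each block."""
    rng = rng or random.Random()
    path = []
    while len(path) < n_periods:
        start = rng.randrange(len(returns) - block_size + 1)
        path.extend(returns[start:start + block_size])
    return path[:n_periods]

history = [0.012, -0.020, 0.004, 0.013, -0.031, 0.007, 0.019, -0.006]
synthetic = block_bootstrap(history, block_size=3, n_periods=12,
                            rng=random.Random(42))
# every value comes from observed history, but the sequence
# as a whole never occurred
```

Generative AI extends this idea by producing values that were never observed at all, while still matching the joint behaviour of markets – which is what makes it hard, and what Chen says would be so valuable for stress testing.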

AI is best understood, at least by the public, through large language models like ChatGPT. It’s relatable because it feels human, and it’s “a good way of expressing or showing how capable this technology is”, Chen says.

“That’s why I think a lot of people have been focusing on this. But if you’re really looking at the research that [is] ongoing, there [are] many other breakthroughs in AI that [are] not necessarily large-language models as well.

“Just thinking about how you currently unlock your iPhone, there [is] a lot of deep-learning analysis involved in detecting your face, and those are things that are probably not as tangible from a user perspective.”

Regulations and ethics

Chen says these less visible developments are likely eventually to have as great an impact on how large asset owners operate as any of the visible developments to date. But he says asset owners will be subject to the same ethical considerations and regulatory requirements as any developers as they figure out the best uses for AI.

Chen is also an adjunct professor at the University of Toronto Rotman School of Management, where he teaches an innovation course and is engaged with the school’s Financial Innovation Hub, and his research focuses on the applications of machine learning techniques for portfolio hedging, derivatives pricing, and risk management. He says AI undoubtedly is “providing a lot of benefits to us”.

“But on the other hand, it’s important to have the regulatory frameworks and all the ethical guidelines,” he says.

“If you really think about the positive of AI, it’s a lot of ground-breaking innovations that are happening, from healthcare to environmental science. And these advancements are not simply a technological evolution, it’s also actually going to enhance our capabilities and improve our life quality. It’s important to keep that positive going.

“But at the same time, strong regulatory frameworks and ethical guidelines will be crucial. That requires us, as a society, to see collaboration between industry, governments, and also the public. We are all stakeholders in these discussions, and we need to make sure that there is a collaborative effort that helps us to shape the landscape going forward, and [that] we are not just focusing on the innovative side. We need to make it inclusive.”

Stanford’s Li told the Fiduciary Investors Symposium that when the university’s Human-Centered AI Institute was founded “our mission statement did not put a national boundary” around the possibilities for AI, nor around the responsibilities of those that develop it.

“I think this technology is fundamentally universal,” she said.

“I think doing good to all humanity is fundamentally important. The geopolitics today at the human level is reality, but it’s also sad; but there’s many things that this technology can do, whether it’s healthcare, or climate, or scientific discovery, [that] transcends geopolitics, transcends national boundaries.

“At Stanford, we’re really privileged. We educate students from all over the world, and we build technology that I hope can be used to benefit people from all over the world.”
