Investor Profile

How Denmark’s Industriens is exploring AI to overhaul risk analysis

Industriens, the DKK 217bn ($30.6bn) Danish pension fund, is using advanced technology and exploring AI models to bring sweeping advantages to its risk management processes.

The hope is the benefits will do much more than speed up analysis and end the manual hunt for errors in Excel. Instead, the technology will allow the investor to optimise its asset allocation, uncover anomalies in data, perform automated text analysis and enforce constraints, for example around ESG, at a level impossible to replicate in Excel.

Elsewhere, the technology can support assimilation, minimise tracking error, maximise Sharpe ratios and feed in sentiment analysis, lists Julia Sommer Legaard, investment risk and data manager at the pension fund for the past year, brought in to help bridge the gap between IT and programming on one side, and portfolio management on the other.

“The idea is to develop a few generalized functions in Python, which can be used for multiple purposes,” she says. “We can find errors much faster and check for abnormalities in the market value or duration of an investment, for example.”
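A generalised function of that kind might look something like the sketch below. It is illustrative only: the function name, the 25 per cent threshold and the data layout are assumptions, not the fund's actual code.

```python
import pandas as pd

def flag_abnormal_changes(df: pd.DataFrame, column: str, threshold: float = 0.25) -> pd.DataFrame:
    """Return rows where `column` (e.g. market value or duration) jumps
    more than `threshold` relative to the previous observation."""
    change = df[column].pct_change().abs()
    return df[change > threshold]

# The same generalised function serves multiple purposes:
positions = pd.DataFrame({
    "market_value": [100.0, 101.5, 250.0, 102.0],  # 250.0 looks like a data error
    "duration": [5.1, 5.2, 5.2, 5.3],
})
print(flag_abnormal_changes(positions, "market_value"))             # flags the swing to 250.0 and back
print(flag_abnormal_changes(positions, "duration", threshold=0.1))  # nothing to flag
```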

Data gathering is step one when it comes to developing powerful models, says Sommer Legaard. Much of her time (she works in a team of eight, including two students) is spent extracting, analysing and validating data. Data is the lifeblood of the models and solutions that provide risk management across the portfolio, spanning everything from duration risk in bonds to investment limits on sectors and countries, credit, solvency, ESG and counterparty risk, regardless of asset class.

Diving into the detail, Sommer Legaard says successfully building a model involves optimising the code to ensure it can handle different tasks, data types and fields – warning that if the data she feeds in is invalid, it scrambles the process.

Once, she recalls, the code in the model couldn’t correctly read the data because it had been set up to search for numbers rather than letters. “You can’t get a good result from the model if your input is not valid,” she explains. Real data is messy – cleaning it can be 80 per cent of the work. “There is a difference between knowing a bit of Python and applying it on real data sets. I always check if the data is valid before saving it to the database.”
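A validation step of that sort might resemble the following sketch. The schema is hypothetical (a positions table with a numeric market_value field), but the coercion catches exactly the numbers-versus-letters mix-up she describes.

```python
import pandas as pd

def validate_before_save(df: pd.DataFrame) -> pd.DataFrame:
    """Check a hypothetical positions table before it is saved to the database."""
    # Coerce text to numbers: anything unparseable (e.g. letters in a numeric
    # field) becomes NaN instead of silently scrambling downstream code.
    df["market_value"] = pd.to_numeric(df["market_value"], errors="coerce")
    invalid = df[df["market_value"].isna()]
    if not invalid.empty:
        raise ValueError(f"{len(invalid)} invalid rows - refusing to save")
    return df

# A row with letters where a number should be is caught before it reaches the database.
positions = pd.DataFrame({"isin": ["DK0001", "DK0002"], "market_value": [101.5, "N/A"]})
validate_before_save(positions)  # raises ValueError: 1 invalid rows - refusing to save
```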

Part of her job is outlier detection. Determining whether an outlier is valid is something that requires human expertise. “We never use our models or programming without some human validation,” she says. It is easy to think something deviates from the norm, particularly during bouts of volatility, when only closer human examination reveals that it doesn’t. Data drawn from volatile periods might contain multiple outliers of which only one is valid, and each requires looking into, she says. “Everything we do we try to automate, but we also vet things manually to check if it looks right.”
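A simple flagging rule of the kind she describes might look like this sketch, where anything more than three standard deviations from the mean is queued for human review rather than acted on automatically. The threshold and data are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def flag_outliers(returns: pd.Series, z: float = 3.0) -> pd.Series:
    """Flag observations more than `z` standard deviations from the mean.
    Flags go to a human for review; nothing is acted on automatically."""
    scores = (returns - returns.mean()) / returns.std()
    return returns[scores.abs() > z]

rng = np.random.default_rng(0)
daily = pd.Series(rng.normal(0, 0.01, 250))
daily.iloc[100] = -0.08  # a volatile day: an outlier, but possibly a valid one
for date, value in flag_outliers(daily).items():
    print(f"Review needed: observation {date} = {value:.4f}")
```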

For investors new to technology or developing in-house models, she suggests starting slow and phasing in support around how the code works. “A lot of it is about trust,” she says. It is often hard to trust a model because it is rooted in such complex maths, but she finds comfort in constant backtesting to check how well it would have performed. “Working with programming daily, it is important to be able to explain what your code does and what is behind the models used for AI to make it transparent.”
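The backtesting she leans on boils down to a walk-forward loop: at each step the model sees only past data, and its call is scored against what actually happened. The toy sketch below illustrates the idea with a hypothetical momentum signal; it is not the fund's framework.

```python
import numpy as np
import pandas as pd

def backtest(prices: pd.Series, signal) -> float:
    """Walk-forward check: at each step the signal only sees past data,
    and its call is compared with the realized outcome. Returns the hit rate."""
    hits, total = 0, 0
    for t in range(20, len(prices) - 1):
        predicted_up = signal(prices.iloc[:t])           # decision on past data only
        actual_up = prices.iloc[t + 1] > prices.iloc[t]  # what actually happened
        hits += predicted_up == actual_up
        total += 1
    return hits / total

# Hypothetical signal: momentum over the trailing 20 observations.
momentum = lambda history: history.iloc[-1] > history.iloc[-20]
prices = pd.Series(np.cumsum(np.random.default_rng(1).normal(0, 1, 250)) + 100)
print(f"Hit rate: {backtest(prices, momentum):.0%}")
```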

A key challenge is the shortage of historical data, because the models’ demands often outstrip what is available. Moreover, historical data can quickly become out of date because the technology is moving so fast.

Other hazards lurk too, like the risks of using data at a time when regulators, artists and media organisations (among others) are increasingly questioning how data is consumed by the technology. “European General Data Protection Regulation is a case in point. You must make sure the data you are using is safely secured and that it is only used to feed the model.”

ESG FOCUS

Data gathering to support ESG integration at the pension fund brings another layer of complexity. Issues include navigating mismatches between the model’s fields and the data from the pension fund’s external vendors, including ESG data providers. It often results in time-consuming Excel comparisons, and sometimes in calculating numbers herself.
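That kind of field reconciliation is the sort of task the team moves out of Excel. A sketch of the idea, with hypothetical field and vendor names:

```python
import pandas as pd

# Hypothetical field names; the vendor file and model schema are assumptions.
model = pd.DataFrame({"isin": ["DK0001", "DK0002"], "co2_tonnes": [120.0, None]})
vendor = pd.DataFrame({"isin": ["DK0001", "DK0003"], "carbon_emissions": [118.0, 90.0]})

# Align the vendor's field to the model's and surface the mismatches
# that would otherwise be hunted down by eye in Excel.
merged = model.merge(
    vendor.rename(columns={"carbon_emissions": "co2_tonnes_vendor"}),
    on="isin", how="outer", indicator=True,
)
print(merged[merged["_merge"] != "both"])  # holdings missing on one side
print(merged[(merged["co2_tonnes"] - merged["co2_tonnes_vendor"]).abs() > 1])
```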

Some of the benefits are already obvious. For example, the technology may be able to help approximate ESG data points in the private equity allocation. Still, the need for companies to draw on ever larger lakes of data for ESG integration raises important questions around the very process of integrating ESG risk – and whether AI goes against the mission of ESG.

“The amount of computer power required to store the data is unsustainable,” she says. “Investors need to minimise their computer power, but the models demand ever more data. We need to take ESG into consideration when we think about future tech advancement.”

As AI becomes more prevalent, she believes retail investors will increasingly harness the technology, potentially muting the investment edge the data is meant to give. She predicts this will lead investors to rebalance more frequently, as the market catches up with strategies faster.

She concludes that success also depends on portfolio teams grasping the technology.

Although many of her colleagues still need encouraging when it comes to programming, she questions whether pension funds really need external companies to develop AI models and applications on their behalf. “It is becoming increasingly easy to use Python so investors might not rely so much on external vendors. It is a question of pushing internally – and I am a pushy person!”