When Google’s chatbot, Bard, made a factual error, the company’s shares plunged 9 per cent in a single day, temporarily wiping off $100 billion in market cap. Last year, OpenAI was fined €15 million ($17.5 million) for processing users’ personal data without adequate legal justification, violating the European Union’s General Data Protection Regulation.
By 2024, Google’s carbon footprint had increased by 48 per cent since 2019 due to the energy use associated with GenAI, jeopardising the company’s commitment to reach net zero by 2030. Meanwhile, in Chile and Uruguay, the company is under pressure for its excessive use of freshwater in new data centres.
AI medical diagnostic systems can exhibit racial and gender biases – skin cancer, for example, is under-detected in black patients because training materials consist mostly of images of white skin. Many large companies have also suffered data breaches in which attackers used AI technology to expose employee and customer data. And so the list goes on.
Much is written extolling the investor opportunities inherent in AI at a time when policymakers tilt towards prioritising deregulation and innovation over safety. A ground-breaking report from £34 billion Railpen on the risks AI holds for investors’ portfolio companies therefore provides a valuable reality check.
Most companies have adopted AI in at least one business function, from informing decision-making to supporting physical operations. Although investors know the financial impacts of these risks can be significant, research and knowledge around financial materiality are scarce, and many struggle to price AI risk.
In partnership with Chronos Sustainability, Railpen’s report highlights the short- and medium-term risks and sets out guidance on how investors can assess companies’ preparedness for what lies ahead via an AI governance framework.
Data, climate and cyber
The report authors argue that large data (and energy) requirements of AI expose companies to vulnerability related to both data provenance and security. Uncritical interpretation of inputs is another risk, producing outputs deemed harmful or that reproduce biases present in the training data. Similarly, AI systems possess no inherent ability to discern between true and false information and AI models are increasingly ‘black boxes’ – the larger and more complex a deep learning system is, the more difficult it is to trace the origin of a particular output.
Cyberattacks are an issue for any company using IT systems, but the growing use of AI significantly amplifies the risks they pose, along with those around data privacy and security.
AI has the potential to transform the nature of labour across the whole economy, although estimates of employment gains or losses and wage growth or stagnation remain highly speculative. It has been estimated that up to eight million UK jobs may be at risk, and 11 per cent of tasks are already exposed to the ‘first wave’ of automation. Lower-paid ‘routine’ cognitive and organisational tasks are at the highest risk, with a disproportionate effect on women and young people.
What can investors do?
The report suggests investors begin by identifying where AI is (or may in the near future be) most significant for the companies in their portfolios. This identification is crucial for prioritising their stewardship efforts effectively. The level of risk (and opportunity) a company faces varies based on its role in the AI value chain, its operational dependency on AI, and how its sector uses AI.
The more a company relies on AI, the greater the related risks: high dependency means that incidents can carry larger financial, operational, legal or reputational consequences. Categorising companies by AI significance therefore gives investors a practical way to prioritise where the risk will be most material.
Questions investors should know the answer to include whether a portfolio company is an AI developer, a deployer, or both. How significantly is AI used within the company, and how does the company’s sector use AI? Is AI oversight structured at senior leadership or board level, and in what ways does AI influence strategic decision-making? Investors should also be mindful of what information the company discloses about its AI operations.
“We encourage investors to participate in dialogue with companies who are at the forefront of AI development and deployment. We also encourage investors to proactively feed into the emerging policy and regulatory discussion on the management of AI risks, and the effective harnessing of AI opportunities. Our sense is that the investor perspective is missing from these policy debates,” wrote the report authors.
Railpen is rapidly developing its AI policy. The investor expects companies developing or deploying AI to demonstrate accountability across the AI value chain, with actions proportionate to their risk exposure, business model, and potential impact. This includes clear board oversight, robust risk management, and transparency.
Where these expectations are not met, and there is evidence of egregious social or environmental harm and inadequate governance, Railpen may vote against the director responsible for oversight.
“We may also support shareholder resolutions addressing AI-related reporting, board accountability, human rights, misinformation, and workforce implications. Plus collective initiatives and policy advocacy,” said the authors.
To help investors assess companies’ approaches to risk management, Railpen and Chronos Sustainability have developed a stewardship framework that moves the responsible AI principles from theory to practice.
Although the long-term capabilities and associated risks of AI remain largely unknown, the framework allows investors to gauge companies’ preparedness for these uncertainties, as well as the steps companies may need to take to manage the risks and harness the opportunities.
“Systemic risks are large-scale threats that cannot be diversified away by individual investors or asset owners, as they affect the entire financial system and economy, and therefore all portfolio constituents.
“To address these portfolio-wide risks, investors should consider deploying system-wide stewardship strategies to help understand and mitigate a range of challenges such as climate change, biodiversity loss, wealth inequality – and the rapid development of AI,” it concludes.