Opinion

We need to be better than ‘optimal’

The dictionary definition of the word “optimal” is “best or most favourable”. While an investment strategy is hardly ever described as “the best” – perhaps for fear that this might sound somewhat hubristic or over-confident – it seems to be perfectly acceptable to describe a strategy as optimal, despite the same intended meaning.

So where did the over-use of optimal come from? Google’s chart of use over time shows that optimal was rarely used until around 1950, after which its use grew exponentially. This supports the hypothesis that optimal entered the investment lexicon following the widespread adoption of mean-variance optimisation, which Harry Markowitz developed in the 1950s. Although the usage statistics for optimal have levelled off in recent years, the advent of robo-advice has given fresh impetus to the use of mean-variance optimisation as a portfolio construction tool.

Mean-variance optimisation uses the expected returns, volatilities and correlations of a range of asset classes in order to arrive at an ‘efficient frontier’. Any strategy sitting on the efficient frontier maximises the expected return for a given level of volatility and is, therefore, described as efficient or optimal.
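For readers who want to see the mechanics, the short Python sketch below traces out such a frontier. The three asset classes and their expected returns, volatilities and correlations are purely illustrative assumptions (and shorting is allowed to keep the algebra simple); they are not figures from any real model.

```python
# Minimal mean-variance sketch with three hypothetical asset classes.
# All numbers are illustrative assumptions, not real capital market views.
import numpy as np

mu   = np.array([0.02, 0.04, 0.06])      # assumed expected returns
vol  = np.array([0.05, 0.10, 0.18])      # assumed volatilities
corr = np.array([[1.0, 0.2, 0.1],
                 [0.2, 1.0, 0.6],
                 [0.1, 0.6, 1.0]])       # assumed correlations
cov  = np.outer(vol, vol) * corr         # covariance matrix

def frontier_weights(target_return):
    """Fully invested, minimum-variance weights that hit `target_return`
    (short positions allowed, purely for simplicity)."""
    A = np.vstack([np.ones_like(mu), mu])        # constraints: sum(w)=1, w.mu=target
    b = np.array([1.0, target_return])
    cov_inv = np.linalg.inv(cov)
    lagrange = np.linalg.solve(A @ cov_inv @ A.T, b)
    return cov_inv @ A.T @ lagrange

# The 'efficient frontier': for each target return, the lowest achievable volatility.
for r in np.linspace(0.02, 0.06, 5):
    w = frontier_weights(r)
    print(f"target {r:.1%}  vol {np.sqrt(w @ cov @ w):.2%}  weights {np.round(w, 2)}")
```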

The problems with the term optimal as a descriptor of investment strategies are, therefore, intimately connected with the weaknesses of mean-variance optimisation as a portfolio construction tool. These include:

  • The use of volatility as the only measure of risk. As a result, volatility and risk have become synonymous, even though a highly volatile asset might be much less risky than one exhibiting low volatility if risk is viewed as, for example, the possibility of suffering a large drawdown or of losing money in real terms. This isn’t to say that volatility is useless as a measure of risk – far from it – simply that it shouldn’t be the only measure of risk.
  • Volatility and correlation inputs for a mean-variance optimisation process are typically backward-looking and assumed to be stable over time. The output based on such assumptions is, therefore, vulnerable to regime changes that materially alter correlation and volatility dynamics. Optimality based on backward-looking inputs is fragile when looking forward.
  • There are many possible approaches to setting the expected return assumptions that go into an optimisation, each with its own strengths and weaknesses. Different approaches can produce very different numbers, yet the high level of uncertainty in these assumptions is frequently ignored, with many of them specified to an accuracy of two decimal places. Given the sensitivity of an optimisation to its input parameters (illustrated in the sketch that follows this list) and the uncertainty inherent in the assumptions, describing the output as optimal is misguided at best.
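To illustrate that sensitivity, here is a short sketch using the same hypothetical three-asset inputs as the frontier example above: nudging a single expected return assumption by half a percentage point noticeably reshapes the “optimal” allocation.

```python
# Sensitivity sketch: same hypothetical inputs as the frontier example above.
import numpy as np

vol  = np.array([0.05, 0.10, 0.18])
corr = np.array([[1.0, 0.2, 0.1],
                 [0.2, 1.0, 0.6],
                 [0.1, 0.6, 1.0]])
cov  = np.outer(vol, vol) * corr

def min_var_weights(mu, target):
    """Fully invested, minimum-variance weights hitting `target` (shorting allowed)."""
    A = np.vstack([np.ones_like(mu), mu])
    b = np.array([1.0, target])
    cov_inv = np.linalg.inv(cov)
    return cov_inv @ A.T @ np.linalg.solve(A @ cov_inv @ A.T, b)

base    = np.array([0.02, 0.04, 0.06])        # baseline return assumptions
shifted = base + np.array([0.0, 0.005, 0.0])  # second asset: 4.0% -> 4.5%

print("baseline weights:", np.round(min_var_weights(base,    0.045), 2))
print("shifted weights: ", np.round(min_var_weights(shifted, 0.045), 2))
```

With these particular (made-up) numbers, a 0.5 percentage point change in one return assumption cuts the highest-volatility asset’s weight from roughly a third of the portfolio to around a fifth, a large change in the “optimal” answer driven by a difference well inside any sensible forecasting error.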

Mean-variance optimisation had not been invented when John Maynard Keynes wrote The General Theory of Employment, Interest and Money, yet the essence of the problem it has created is captured nicely in the following quote from the book: “Too large a proportion of recent mathematical economics are merely concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols”.

This is not to say that mean-variance optimisation should be avoided altogether. With a clear-sighted recognition of its limitations, optimisation can be used sensibly as one input within a wide-ranging investment strategy discussion. Such a discussion would seek to address the many unquantifiable trade-offs investors face: the attractions of investing in less liquid assets vs. the opportunity cost of reduced flexibility; the costs and benefits of leverage; a desire to diversify vs. a preference for simplicity, among many others. This discussion should make use of a mix of quantitative and qualitative inputs, with expert judgement playing an important role.

Bias towards quantitative tools widespread

Many investors already follow such an approach when setting strategy and may consider this tirade largely unnecessary. However, a bias towards quantitative and precise numerical tools remains prevalent in the investment industry, as highlighted in Andrew Lo and Mark Mueller’s fascinating 2010 paper “Warning: Physics envy may be hazardous to your wealth”. Perhaps more worryingly, a few minutes spent on the websites of some well-known robo-advisers demonstrates that efficient frontiers and optimal portfolios are very much alive and kicking.

In today’s environment of heightened political uncertainty, with monetary stimulus of a type and on a scale never seen before, and with structural trends such as climate change, global ageing and technological disruption likely to change the investing environment radically, we need thoughtful, constructive debate like never before. An over-reliance on simplistic quantitative tools will lead to naïve portfolio construction, wrapped in a comfort blanket of “optimality”.

Instead, we need to face the radical complexity of the real world. Rather than aiming for some quantitative definition of optimality, we should seek the more humble and realistic aim of robustness under a range of plausible scenarios. Stress testing and scenario analysis – straightforward deterministic tools – provide a useful, powerful alternative to the relatively complex stochastic and optimisation tools that are more frequently used.
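As a simple illustration of what such deterministic tools can look like, the sketch below applies a handful of hypothetical shock scenarios to a stylised portfolio; the asset classes, weights and shock sizes are assumptions made up for illustration only.

```python
# Minimal deterministic stress-test sketch. Portfolio weights and shock sizes
# are illustrative assumptions only.
portfolio = {"equities": 0.50, "credit": 0.30, "gov_bonds": 0.20}   # weights

# Assumed instantaneous return shocks for each asset class in each scenario
scenarios = {
    "equity crash": {"equities": -0.30, "credit": -0.10, "gov_bonds": 0.05},
    "rates shock":  {"equities": -0.05, "credit": -0.08, "gov_bonds": -0.10},
    "stagflation":  {"equities": -0.15, "credit": -0.12, "gov_bonds": -0.08},
}

for name, shocks in scenarios.items():
    impact = sum(portfolio[asset] * shocks[asset] for asset in portfolio)
    print(f"{name:<13} portfolio impact: {impact:+.1%}")
```

No probabilities or correlation matrices are required: each scenario is simply a plausible story expressed in numbers, and the question asked of the portfolio is whether the outcome would be tolerable.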

Optimality is an illusion – it’s time we removed it from the investment lexicon.

 

Phil Edwards is European director of strategic research at Mercer.

Comments
Nathan Fabian

Thank you Phil for this thoughtful article. Do we need alternative theories of investment in order for your many tools approach to gain traction in the market?
