Beyond backtests: considering the robustness of smart beta

Systematic equity investment strategies – so-called smart beta strategies – are usually marketed on the basis of outperformance. However, it is important to recognise that performance analysis is typically conducted on backtests that apply the smart beta methodology to historical stock returns. For actual investment decisions, a relevant question is therefore how robust this outperformance is, writes Felix Goltz.

In general, robustness refers to the capacity of a system to perform effectively in a constantly changing environment. For smart beta strategies, we distinguish between ‘relative robustness’ and ‘absolute robustness’.

A strategy is assumed to be ‘relatively robust’ if it is able to deliver similar outperformance in similar market conditions. Single factor indices aim to achieve this kind of robustness.

Absolute robustness refers to the absence of pronounced state and/or time dependencies: an absolutely robust strategy outperforms irrespective of prevailing market conditions. Multi-factor indices often aim to improve absolute robustness.


Causes of lack of robustness

Factor fishing and model mining risks

Investors who wish to benefit from factor premia need to address robustness when selecting a set of factors. Recent research (Harvey, C., Y. Liu, and H. Zhu, 2015, ‘… and the Cross-Section of Expected Returns’, Review of Financial Studies, forthcoming) documents a total of 314 factors with a positive historical premium, but shows that many factors may be a result of data mining: strong and statistically significant factor premia may result from many researchers searching through the same dataset to find publishable results.
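
The scale of this multiple-testing problem can be illustrated with a simulation. The sketch below is a hypothetical illustration, not a result from the cited study: it tests hundreds of purely random return series – with no true premium at all – and counts how many appear ‘significant’ at the conventional t-statistic threshold of 2.

```python
# Hypothetical illustration: data mining on pure noise. We simulate many
# return series with a true premium of zero and count how many clear the
# conventional |t| > 2 significance bar by luck alone.
import numpy as np

rng = np.random.default_rng(0)
n_factors, n_months = 314, 480                  # assumed: 40 years of monthly data
returns = rng.normal(0.0, 0.04, size=(n_factors, n_months))  # zero true premium

t_stats = returns.mean(axis=1) / (returns.std(axis=1, ddof=1) / np.sqrt(n_months))
n_spurious = int(np.sum(np.abs(t_stats) > 2.0))
print(f"spurious 'factors' at |t| > 2: {n_spurious} of {n_factors}")
```

With a 5% two-sided test, roughly one noise series in twenty passes by chance, which is one reason the literature argues for much higher significance hurdles when evaluating newly proposed factors.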

To avoid exposure to data-mined factors, a key requirement for investors to accept a factor as relevant in their investment process is that there is a clear economic intuition as to why exposure to this factor constitutes a systematic risk.

Failure to select a suitable proxy for the rewarded factor will harm the relative robustness of the strategy.

Model mining risk is the risk of having an index construction methodology that results in a good track record in backtesting, without any persistence going forward. For example, many value-tilted or dividend-focused indices include a large set of ‘ad-hoc’ methodological choices, opening the door to model mining.

Exposures to specific risks

All smart beta strategies are exposed to unrewarded strategy-specific risks. Specific risks correspond to all the risks that are unrewarded in the long run, and therefore not ultimately desired by the investor.

In line with portfolio theory, among the unrewarded risks we find specific financial risks (also called idiosyncratic stock risks) that correspond to the risks that are specific to the company itself.

It is this type of risk that asset managers are supposed to be the best at knowing, evaluating and choosing in order to create alpha, but portfolio theory considers it to be neither predictable nor rewarded, so it is better to avoid it by investing in a well-diversified portfolio.
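
The diversification argument can be made concrete with a small simulation under an assumed one-factor model (all parameters below are illustrative, not calibrated to real data): as the number of holdings grows, idiosyncratic noise averages out, leaving only the systematic component.

```python
# Sketch of diversification under an assumed one-factor model:
# stock return = market return + idiosyncratic noise.
import numpy as np

rng = np.random.default_rng(1)
n_months, sigma_market, sigma_idio = 600, 0.04, 0.08
market = rng.normal(0.0, sigma_market, n_months)

def equal_weight_vol(n_stocks):
    # equal-weighted portfolio of stocks that share the market factor
    idio = rng.normal(0.0, sigma_idio, size=(n_stocks, n_months))
    portfolio = market + idio.mean(axis=0)
    return portfolio.std(ddof=1)

vol_5, vol_500 = equal_weight_vol(5), equal_weight_vol(500)
print(f"vol with 5 stocks: {vol_5:.4f}, with 500 stocks: {vol_500:.4f}")
```

With 500 stocks, the idiosyncratic term is almost entirely diversified away and portfolio volatility approaches the assumed market volatility of 4%.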

Specific risks can also correspond to important financial risk factors that do not explain, over the long term, the value of the risk premium associated with the index. The academic literature considers for example that commodity, currency, and sector risks do not have a positive long-term premium. For example, value strategies often lead to pronounced tilts towards financial sector stocks. During the financial crisis of 2008, exposure to the financial sector proved to be a major determinant of performance of these strategies. It should be noted that the tilt towards the financial sector may not be desired, but it came as a by-product of holding value stocks.

Model-specific risks that are specific to the implementation of the diversification model are also a form of unrewarded risk. As per Modern Portfolio Theory, every investor should optimally combine risky assets so as to achieve the highest possible Sharpe ratio. Implementing this objective, however, is a complex task because of the presence of estimation risk for the required parameters, namely expected returns and covariance parameters.

In practice, the costs of estimation error may entirely offset the benefits of optimal portfolio diversification.
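
A small simulation can show how estimation error flatters an optimised portfolio. The numbers below are assumptions chosen so that the universe is large relative to the sample; the point is generic, not tied to any particular index.

```python
# Hypothetical simulation: a minimum-variance portfolio built from a noisy
# sample covariance looks better in-sample than it truly is out-of-sample.
import numpy as np

rng = np.random.default_rng(2)
n_assets, n_obs = 40, 60                        # universe large relative to sample

# assumed true covariance: one common factor plus idiosyncratic noise
betas = rng.uniform(0.5, 1.5, n_assets)
true_cov = 0.04**2 * np.outer(betas, betas) + np.diag(rng.uniform(0.02, 0.10, n_assets)**2)

returns = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=n_obs)
sample_cov = np.cov(returns, rowvar=False)

def gmv_weights(cov):
    # global minimum-variance weights: proportional to inverse(cov) @ ones
    w = np.linalg.solve(cov, np.ones(len(cov)))
    return w / w.sum()

w_hat = gmv_weights(sample_cov)                 # estimated "optimal" weights
var_in_sample = w_hat @ sample_cov @ w_hat      # what the backtest reports
var_true = w_hat @ true_cov @ w_hat             # what the investor actually bears
w_ideal = gmv_weights(true_cov)
var_ideal = w_ideal @ true_cov @ w_ideal        # unattainable benchmark

print(f"in-sample: {var_in_sample:.6f}, true: {var_true:.6f}, ideal: {var_ideal:.6f}")
```

The estimated portfolio's in-sample variance understates its true variance, which in turn exceeds that of the unattainable ideal portfolio: the gap is the cost of estimation error.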


Dependency on individual factor exposures

Systematic risks come from the fact that smart beta strategies can be more or less exposed to particular risk factors, depending not only on the methodological choices guiding their construction (implicit), but also on the universe of stocks supporting this construction scheme (explicit).

For example, fundamentals-weighted portfolios typically have a value tilt and minimum-volatility strategies exhibit a low-beta tilt.

Each weighting scheme exposes investors to implicit risk factors that may or may not be consistent with their risk objective. It is important to note that periods of poor performance in all factors are common throughout long-horizon historical tests, and the underperformance occurs at different points in time.

Therefore, investing in a single factor is not a robust approach in absolute terms, as performance will vary greatly across different time periods.


Improving robustness

Avoidance of data or model mining through a consistent framework

Establishing a consistent framework for smart beta index creation limits the construction choices while providing the flexibility needed. Consistency in the index framework has two main benefits.


First, it prevents model mining by limiting the number of choices through which indices can be constructed. A uniform framework is the best safeguard against ‘post hoc’ index design, or model mining (i.e. the possibility of testing a large number of smart beta strategies and publishing the ones that have good results).

Second, analysis across specification choices is vital: the range of outcomes gives a more informative view than any single specification, which could always have been cherry-picked.

An index that performs well across multiple specification choices is more robust than an index that performs well only under a single specification, whose results could very well be due to chance rather than to the robustness of the strategy.

Pre-packaged indices do not allow investors to compare across specifications to gauge the sensitivity of performance to index specification choices, thereby exposing investors to unintended and undesired risks.

Another way to detect an inconsistent conceptual framework is to look at how the methodology for the same strategy or the same factor has changed over time.

Some index providers have launched new factor indices when they already had factor indices for the same factor on the market. In this case, the new indices have the same objective as the old ones, but different construction principles.

This phenomenon bears a striking resemblance to the practice among asset managers of creating new funds – or changing the strategy of existing funds – in order to overshadow the poor track record of the old fund. Thus, an inconsistent framework over time is also a kind of model mining that allows index providers to launch new indices with better track records.

Improving relative robustness by reducing unrewarded risks

Relative robustness can be improved by minimising the unrewarded risk as much as possible. There are numerous approaches to estimating risk parameters.

The sample estimator of a covariance matrix produces extremely high estimation errors when the ratio of universe size to sample size is large (sample risk).

One solution to this problem is to reduce the number of parameters to be estimated by imposing a structure on the covariance matrix.

Although this method reduces sample risk, its drawback is that the estimator is biased if the risk model does not conform to the true stock-return-generating process (model risk).

State-of-the-art estimators for risk parameters aim to achieve a trade-off between sample risk and model risk.
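
As an illustration of that trade-off, the sketch below shrinks a noisy sample covariance matrix toward a heavily structured target (a scaled identity matrix). The fixed shrinkage intensity is an assumption for illustration; estimators in the Ledoit–Wolf tradition choose it from the data.

```python
# Sketch: shrinkage trades sample risk against model risk. The sample
# covariance is unbiased but noisy; the identity-style target is stable
# but biased toward a single average variance.
import numpy as np

rng = np.random.default_rng(3)
n_assets, n_obs = 50, 100
true_vols = rng.uniform(0.03, 0.07, n_assets)
true_cov = np.diag(true_vols**2)                # assumed diagonal truth, for simplicity

returns = rng.multivariate_normal(np.zeros(n_assets), true_cov, size=n_obs)
sample_cov = np.cov(returns, rowvar=False)

# structured target: the average sample variance on the diagonal, zeros elsewhere
target = (np.trace(sample_cov) / n_assets) * np.eye(n_assets)

delta = 0.5                                     # assumed shrinkage intensity
shrunk_cov = delta * target + (1 - delta) * sample_cov

err_sample = np.linalg.norm(sample_cov - true_cov)   # Frobenius-norm error
err_shrunk = np.linalg.norm(shrunk_cov - true_cov)
print(f"error of sample estimator: {err_sample:.4f}, after shrinkage: {err_shrunk:.4f}")
```

Here shrinkage substantially reduces the estimation error; with a badly misspecified target or too high an intensity, the bias term would dominate instead.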

One serious concern with optimisation-based weighting schemes is that the stocks with the highest estimation error may receive the highest weight – a process commonly known as ‘error maximisation’ – which is detrimental to the relative robustness of the strategies. In practice, various kinds of deconcentration constraints are adopted to improve diversification.

For example, recent research introduces flexible constraints that put limits on the overall amount of concentration in the portfolio, rather than limiting the weight of each stock in the portfolio, thus leaving more room for the optimiser while avoiding concentration overall.
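
One simple way to quantify what such an overall constraint targets is the ‘effective number of stocks’, the inverse of the sum of squared weights (a Herfindahl-based measure). The example below is a generic illustration of the measure, not the cited research's exact formulation.

```python
# Sketch: measuring overall portfolio concentration with the effective
# number of stocks, N_eff = 1 / sum(w_i^2). A norm constraint caps
# concentration as a whole rather than capping each stock's weight.
import numpy as np

def effective_n(weights):
    weights = np.asarray(weights, dtype=float)
    return 1.0 / np.sum(weights**2)

equal_w = np.full(10, 0.10)            # perfectly deconcentrated
tilted_w = np.array([0.25, 0.15, 0.12, 0.10, 0.09, 0.08, 0.07, 0.06, 0.05, 0.03])

print(f"{effective_n(equal_w):.2f}")   # equals the actual number of stocks
print(f"{effective_n(tilted_w):.2f}")
```

The tilted portfolio breaches a hypothetical 15% per-stock cap yet still holds the equivalent of more than seven equally weighted stocks, showing why a limit on overall concentration leaves the optimiser more room than stock-level caps.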

Even though different weighting schemes offer efficient diversification of stocks, there is a need for additional diversification of the weighting schemes to diversify away the strategy-specific risks – a concept called “diversifying the diversifiers” (see Timmermann (2006), Kan and Zhou (2007), Tu and Zhou (2011) and Amenc, Goltz, Lodh, Martellini (2012) on the benefits of combining portfolio strategies).

The combination of different strategies diversifies risks that are specific to each strategy by exploiting the imperfect correlation between the different strategies’ parameter estimation errors.

Thus, diversifying the model risks further reduces the unrewarded risks, and renders the weighting scheme more robust (in a relative manner).
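
A stylised sketch of this effect, with an equal-weighted portfolio standing in for the unknown ideal weights and purely hypothetical noise levels for each scheme's estimation error:

```python
# Hypothetical sketch: averaging two weighting schemes whose estimation
# errors are imperfectly correlated reduces the error of the final weights.
import numpy as np

rng = np.random.default_rng(4)
n_stocks = 100
w_target = np.full(n_stocks, 1.0 / n_stocks)    # stand-in for the ideal weights

def noisy_scheme(noise_scale):
    # each scheme = ideal weights + its own independent estimation error
    w = w_target + rng.normal(0.0, noise_scale, n_stocks)
    w = np.clip(w, 0.0, None)                   # long-only
    return w / w.sum()

w_a, w_b = noisy_scheme(0.005), noisy_scheme(0.005)
w_combined = 0.5 * (w_a + w_b)                  # "diversifying the diversifiers"

err = lambda w: np.linalg.norm(w - w_target)
print(f"scheme A: {err(w_a):.4f}, scheme B: {err(w_b):.4f}, combined: {err(w_combined):.4f}")
```

Because the two error vectors partly offset each other, the combined weights sit closer to the target than the worse of the two schemes, and typically closer than either one.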

Improving absolute robustness by diversifying across factors

As discussed before, investors who rely on a single factor exposure run the risk that the underlying factor underperforms over short periods.

The reward for exposure to these factors has been shown to vary over time. While this time variation in returns is not completely in sync for different factors, allocating across factors allows investors to diversify the sources of their outperformance and smooth their performance across market conditions.
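
This smoothing effect can be sketched with two simulated factor return series; the premium, volatility, and correlation figures below are illustrative assumptions.

```python
# Simulated sketch: blending two imperfectly correlated factors lowers
# the volatility of the outperformance stream.
import numpy as np

rng = np.random.default_rng(5)
n_months = 600
premium, vol, corr = 0.003, 0.03, 0.2           # assumed monthly figures

cov = vol**2 * np.array([[1.0, corr], [corr, 1.0]])
factor_rets = rng.multivariate_normal([premium, premium], cov, size=n_months)

single = factor_rets[:, 0]                      # exposure to one factor only
blended = factor_rets.mean(axis=1)              # 50/50 allocation across both

vol_single = single.std(ddof=1)
vol_blended = blended.std(ddof=1)
print(f"single-factor vol: {vol_single:.4f}, blended vol: {vol_blended:.4f}")
```

With a correlation of 0.2, the blend's volatility is about sqrt((1 + 0.2) / 2), or roughly 77%, of a single factor's, while the expected premium is unchanged – the diversification gain that motivates multi-factor allocation.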

Felix Goltz is head of applied research, EDHEC-Risk Institute; Research Director, ERI Scientific Beta.


