Forecasting the cash-flows of private equity funds is an essential task. It allows investors to control the liquidity of their portfolio, manage their risks, and construct more efficient investment programs. It is also important for LPs active in the secondary market as it allows them to price their portfolio.
Data-driven approaches provide an extremely fast, cost-effective, and systematic way to forecast cash-flows. The two main families of techniques are the top-down and the bottom-up approaches.
Top-down techniques rely on statistical and machine learning models calibrated on historical fund-level performance data. They are very convenient because they only require the user to enter fund-level input, which is simple to gather.
Despite their usefulness, top-down models have a handful of limitations:
- They do not work very well for concentrated portfolios
- They do not work very well for funds in certain regions/strategies
- They do not capture recent developments in the market such as credit facilities, NAV-based facilities, GP-led transactions, etc.
- They do not provide transparency on the risks of the underlying assets (e.g. concentration)
Therefore, when granular data is available, the bottom-up approach provides an extremely efficient alternative.
Here are our top three reasons to use bottom-up models whenever possible:
1. Diversification and the law of large numbers
The bottom-up approach consists of forecasting the future proceeds of each underlying asset in a fund and then aggregating them at the fund level. Hence, the fund cash-flow predictions are constructed from the most granular data available.
In statistics, when computing the average of a sample, the more data points in the sample, the more precise the average becomes: independent errors partially cancel out, so the standard error of the aggregate shrinks roughly in proportion to one over the square root of the number of observations. The same phenomenon plays out in private equity (and in many other areas of finance such as insurance and banking).
For example, if an investor has a portfolio with 30 funds, each having on average 20 underlying assets, the model will seek to forecast the proceeds of 600 individual assets. Because of the number of underlying assets, the bottom-up approach is likely to provide much more precise forecasts than the top-down framework, which only uses fund-level information to compute predictions.
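The error-cancellation effect above can be illustrated with a small simulation. This is a simplified sketch, not our production model: it assumes each asset-level forecast is unbiased but noisy (hypothetical figures: true proceeds of 10 per asset, forecast noise with a standard deviation of 30%), and measures how the relative error of the aggregated portfolio forecast shrinks as the number of assets grows.

```python
import random
import statistics

random.seed(42)

def portfolio_forecast_error(n_assets, n_trials=2000):
    """Average relative error of a portfolio forecast built by summing
    n_assets independent, unbiased but noisy asset-level forecasts."""
    errors = []
    for _ in range(n_trials):
        # Hypothetical numbers: each asset's true proceeds are 10.0,
        # and each asset-level forecast has ~30% standard deviation.
        true_total = 10.0 * n_assets
        forecast = sum(random.gauss(10.0, 3.0) for _ in range(n_assets))
        errors.append(abs(forecast - true_total) / true_total)
    return statistics.mean(errors)

# Independent errors partially cancel, so the relative error of the
# aggregate shrinks roughly like 1 / sqrt(n_assets).
for n in (1, 25, 600):
    print(n, round(portfolio_forecast_error(n), 3))
```

With 600 underlying assets, the simulated relative error is roughly 1/25th of the single-asset error, which is the statistical intuition behind the precision advantage of the bottom-up approach.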
Incidentally, this is also one of the reasons why some secondaries firms outsource parts of their analytical processes to external service providers in low-cost regions instead of using their in-house teams. Even though the quality of each individual analysis may be lower, when looking at diversified opportunities, the law of large numbers plays in their favor.
2. Deep insights into future proceeds
The bottom-up approach provides deep insights about the future proceeds of your portfolio. For instance, it provides details on:
- the expected level of fees and carried interest paid to the GPs
- the use of credit facilities and the resulting cash-flows
- the concentration of future proceeds across various dimensions such as sector, currency, deal, etc. (see Figure 1)
This level of granularity makes it easy to identify fundamental risks such as alignment with the GPs, concentration, and leverage.
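Once asset-level forecasts exist, slicing expected proceeds along a dimension such as sector or currency is a simple aggregation. The sketch below uses hypothetical asset names and figures (not real portfolio data) to show how a concentration breakdown like the one described above can be computed.

```python
from collections import defaultdict

# Hypothetical asset-level forecasts: (asset, sector, currency, expected proceeds)
forecasts = [
    ("Asset A", "Healthcare",  "USD", 120.0),
    ("Asset B", "Software",    "EUR",  80.0),
    ("Asset C", "Software",    "USD", 200.0),
    ("Asset D", "Industrials", "EUR",  50.0),
]

def concentration(rows, dim_index):
    """Share of total expected proceeds per bucket of one dimension
    (e.g. dim_index=1 for sector, dim_index=2 for currency)."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[dim_index]] += row[3]
    grand_total = sum(totals.values())
    return {bucket: total / grand_total for bucket, total in totals.items()}

by_sector = concentration(forecasts, 1)    # Software accounts for 280/450 here
by_currency = concentration(forecasts, 2)  # same logic applied to currency
```

The same aggregation can be run per deal, geography, or vintage, which is what makes the bottom-up view useful for spotting concentration risk.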
3. Transparency and explainability
Statistical and machine learning techniques used in top-down models can be notably difficult to explain and to rely on. Providing a clear explanation to an investment committee of the drivers behind a prediction can be tough.
The bottom-up approach provides a very high degree of transparency on the drivers of future proceeds. It allows users to track and challenge the assumptions for each underlying asset, making it easier to trust the model and to use it in business applications.
This is the reason why at RockSling Analytics we have built a systematic data-driven bottom-up algorithm. The algorithm is fully automated and can mine your data to provide you with the deepest level of insights about the risks and returns of your portfolio.
Interested in trying it out? Do not hesitate to contact us at email@example.com