Portfolio Opportunity Distributions (PODs) comprise all of the portfolios that could have been held by an investment manager adhering to a specified mandate, such as the S&P500 or the Russell 1000 Growth. These comprehensive, scientifically designed peer groups solve the serious problems associated with traditional peer groups and indexes by combining the better characteristics of each in a manner that eliminates the shortcomings of both. Unlike traditional peer groups, PODs are timely, unbiased and accurate. Unlike traditional indexes, PODs determine statistical significance for periods much shorter than decades.
Popular Index Portfolio Opportunity Distributions (PIPODs) unify the better aspects of peer groups with those of benchmarks to create a performance evaluation background that is fairer, more accurate, and much more timely than current approaches, and that provides indications of significance unattainable elsewhere. Please read the following article for more details.
Unifying Best Practices to Attain Best-of-Breed Investment Performance Evaluation
How do you evaluate investment performance? There's a good chance you use both peer groups and benchmarks. A recent survey of 700 consultants and investors, conducted by the Investment Management Consultants Association (IMCA), found that 90% of respondents use peer groups to evaluate investment performance and 95% use benchmarks. In other words, most evaluators use both peer groups and benchmarks.
Why do you suppose the industry hasn't embraced a single approach? Because neither is clearly superior. Peer groups and benchmarks each have good and bad characteristics. Let's identify some of the bad characteristics, with an eye toward possible improvements. Peer groups are plagued by biases, including survivor, classification, and composition biases. Survivor bias raises the hurdle by including only those portfolios that have remained in business for the entire evaluation period, which is generally 5 years or more.* Classification and composition biases can raise or lower the bar, and it's hard to know which is happening. Classification bias results from trying to pigeonhole managers into style bins, when in fact most managers are blends of styles. Composition bias relates to the collection of funds and products gathered together by the universe provider. In addition to these biases, peer groups suffer from a serious lack of timeliness. It generally takes 4-6 weeks to assemble most peer groups, so clients need to be patient, and consultant ingenuity is a must for those early meetings. In the IMCA study, 95% of respondents said that timeliness is important, which is probably one of the reasons benchmarks are somewhat more popular than peer groups.
Benchmarks solve most of the problems with peer groups, but they come with a unique and serious problem of their own: it takes decades to determine with confidence whether the manager is actually skillful or not. If you're using a benchmark, sooner or later you're going to want to know if that 2% return above the benchmark is a big deal or not.
Consequently, common practice is to evaluate a manager against a peer group, and to also show a benchmark against that same peer group, thereby compensating for the inadequacies of both approaches. But there is a better way to combine these two approaches. This new unification removes the biases of peer groups, is available virtually immediately, and eliminates the waiting-time problem of benchmarks. Here's how it works. Pick your favorite benchmark. Then, instead of calculating a single return that is the combined performance of all the stocks in the benchmark, calculate the performance of all of the portfolios that could have been formed from the stocks in the benchmark, following some reasonable portfolio construction rules. The result is an opportunity set for all managers who are evaluated against the benchmark, and it looks just like a traditional peer group, floating bars and all. We call this new approach Popular Index Portfolio Opportunity Distributions, or PIPODs.
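To make the construction concrete, here is a minimal sketch of the idea under simplifying assumptions the article does not specify: a tiny hypothetical ten-stock index, equal weighting, and a fixed number of holdings. With so few constituents, every possible portfolio can be enumerated outright; a real PIPOD engine would use the actual index constituents and richer construction rules.

```python
from itertools import combinations
from statistics import mean

def opportunity_distribution(stock_returns, n_holdings):
    """Enumerate every equal-weighted portfolio of n_holdings stocks
    drawn from the index constituents, and return the sorted list of
    portfolio returns: the opportunity set for the mandate."""
    return sorted(mean(combo) for combo in combinations(stock_returns, n_holdings))

# Hypothetical constituent returns for a small illustrative index
constituents = [0.12, -0.03, 0.07, 0.25, -0.10, 0.04, 0.18, 0.01, -0.06, 0.09]

# 10 choose 4 = 210 possible 4-stock portfolios
dist = opportunity_distribution(constituents, n_holdings=4)
```

Because every stock appears in the same number of portfolios, the average of the opportunity set equals the equal-weighted index return, which is why the center of the distribution sits at the index itself.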
The median of a PIPOD peer group is the return on the index, and the percentiles around the median are indications of the significance of success or failure. A ranking in the top decile of a PIPOD universe says that there is a 90% probability that skill, not luck, was involved, regardless of the time period. And the answer to the question "What portfolios are in a PIPOD universe?" is "All of them," so you are assured of a fair and accurate evaluation.
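The percentile-ranking logic can be sketched as follows; the opportunity set here is a hypothetical sorted list of returns, not real PIPOD data. A rank of 0.9 reads as the manager having outperformed 90% of all the portfolios that could have been held under the mandate, which is the skill-versus-luck interpretation above.

```python
from bisect import bisect_left

def percentile_rank(opportunity_returns, manager_return):
    """Fraction of opportunity-set portfolios the manager outperformed.
    `opportunity_returns` must be sorted ascending."""
    return bisect_left(opportunity_returns, manager_return) / len(opportunity_returns)

# Hypothetical sorted opportunity set for illustration
opportunity = [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10]

rank = percentile_rank(opportunity, 0.095)  # beats 9 of the 10 portfolios
```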
Some examples will demonstrate the benefits of PIPODs. Let's start with significance. If your manager underperformed his or her benchmark by 5% in the fourth quarter of 2002, would that be really bad or just sort of bad? How significant would it be? Well, you'll probably say that it depends on the volatility of the manager's style, as represented by the benchmark. A 5% underperformance in a very conservative style would be more disappointing than the same underperformance in a very aggressive, volatile approach, where there's an implied greater tolerance for risk. But where do the lines get drawn? The following exhibit delivers the answer.
A return of 2.5% lagged the Russell 2000 Small Cap Growth index by 5%, but because this is a volatile mandate, the underperformance is not significant, ranking in the 75th percentile. By contrast, underperforming the more conservative, large-company S&P500 by 5% is significant at the 94% confidence level: a big deal, indicating a significant management mistake rather than bad luck. Don't try this at home, kids, unless you have PIPODs.
PIPODs solve the waiting-time problem inherent in benchmarks. In the example above we determined significance at a very high confidence level for a short period of time, namely one quarter. Benchmarks require decades to support similar inferences.
Now here's another example: choosing the right benchmark. Consider the manager shown in the next exhibit, who is being benchmarked against the S&P500, everyone's favorite benchmark. In the IMCA survey, 96% said they use the S&P500, making it the most popular choice.
For periods of 3 quarters or longer, it looks like this manager is sensational, off the tops. But wait a second. If PIPODs are all of the portfolios that could have been held using stocks in the index, this manager must have held stocks outside the index, and plenty of them; the manager is not managing to the S&P500. It's the only way the manager could be off the tops.
Well, here's the reason: we fabricated this example to make a point, by using the median returns for small cap value, as shown in the next exhibit. The point is that a mediocre manager can look really good, or really bad, when compared to the wrong index.
As you can see, if it looks too good (or too bad) to be true, it probably is. The important thing is to get the most accurate look you can. PIPODs deliver fairness and accuracy by letting you select the best benchmark for your manager, and by feeding back some reasonableness checks for your consideration.
There are a couple more benefits of PIPODs that we haven't mentioned yet. PIPODs are available monthly, mere days after each month's end. March PIPODs will be available around April 3. April PIPODs will be coming out at about the time that traditional peer groups for March are being released. And of course you won't see April peer groups for separate accounts at all. Your only other choice for timely monthly peer groups is mutual funds, which clearly don't make sense for separate accounts because mutual fund returns are net of fees, whereas separate accounts are usually evaluated gross of fees. The other benefit of PIPODs is the ability to further customize the peer group to your manager's degree of diversification, as characterized by the number of securities typically held. More diversification (more names) shrinks the range of the floating bar, and less diversification (more concentration) expands it. This adds to the accuracy and fairness of the evaluation.
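A rough illustration of why more names narrow the floating bar, again using a hypothetical ten-stock index and equal weighting (assumptions of this sketch, not the article): concentrated portfolios can load up on the extreme stocks, while diversified portfolios are pulled toward the index average, so the range of attainable returns shrinks as the holding count rises.

```python
from itertools import combinations
from statistics import mean

def bar_width(stock_returns, n_holdings):
    """Width (max minus min) of the floating bar: the range of returns
    across every equal-weighted portfolio of n_holdings names."""
    portfolio_returns = [mean(c) for c in combinations(stock_returns, n_holdings)]
    return max(portfolio_returns) - min(portfolio_returns)

# Hypothetical constituent returns for a small illustrative index
constituents = [0.12, -0.03, 0.07, 0.25, -0.10, 0.04, 0.18, 0.01, -0.06, 0.09]

concentrated = bar_width(constituents, 3)  # few names: wide bar
diversified = bar_width(constituents, 8)   # many names: narrow bar
```

Holding all ten names collapses the bar to a single point, the index return itself.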
So there you have it. Unifying the better aspects of peer groups with those of benchmarks creates a performance evaluation background that is fairer, more accurate, and much more timely than current approaches, and it produces indications of significance that are unattainable elsewhere. The current practice of showing both peer groups and benchmarks on the same page doesn't solve the many problems with these two approaches, but it does confirm that the evaluator is aware of the problems. Try PIPODs. They're available for free here. See the difference for yourself.
Ron Surz (Ron@PPCA-Inc.com)
*The analogy that's frequently used to describe survivorship bias is the marathon with 1000 runners and 100 finishers. Is the 100th finisher dead last, or in the top decile? He's in the top decile.
John O'Brien originated this approach at A.G. Becker in the 1970s, and recently suggested the adaptation to popular indexes. John is the executive director of the Master's Program in Financial Engineering and an adjunct professor at the Haas School of Business, University of California, Berkeley.