1. In 100 words or less, what are PIPODs?
PIPODs are a scientific way to evaluate investment manager performance. Virtually all portfolios that a manager could conceivably hold are computer-generated, using the manager's own benchmark and level of diversification, as characterized by the number of securities typically held. The manager's actual result is then ranked within this opportunity set to determine the probability of success or failure. PIPODs are a modern-day application of classical statistics: they determine the likelihood that the manager's return could have resulted from mere chance.
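Conceptually, the ranking step can be sketched in a few lines. The sketch below is a simplifying illustration, not the actual PIPOD methodology: it assumes equal-weighted random draws from the benchmark's constituents over a single period, and `pipod_rank` is a hypothetical name.

```python
import random
import statistics

def pipod_rank(benchmark_returns, n_held, manager_return,
               n_portfolios=10_000, seed=0):
    """Rank a manager's return within an opportunity set of random portfolios.

    benchmark_returns: per-security period returns for the benchmark's constituents.
    n_held: number of securities the manager typically holds.
    Returns the fraction of simulated portfolios the manager beat.
    """
    rng = random.Random(seed)
    sims = []
    for _ in range(n_portfolios):
        picks = rng.sample(benchmark_returns, n_held)   # random portfolio of n_held names
        sims.append(statistics.mean(picks))             # equal-weighted, a simplification
    beaten = sum(1 for r in sims if r < manager_return)
    return beaten / n_portfolios
```

A manager whose return beats 95% of the simulated portfolios is unlikely to have gotten there by chance; one who beats 50% looks no better than a dart thrower.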
2. How are PIPODs really different from manager universes, and why are they better?
PIPODs are better than managed universes, also known as "peer groups," because they agree with those universes where they should and differ where they should. Managed universes have well-documented biases, including survivor, composition and classification biases. At least one purveyor of such universes has used PIPODs as a standard to reduce these biases where possible. In other words, the better universes are similar to PIPODs, while the poorer ones are not. Because of survivor bias, no managed universe matches PIPODs over longer periods of, say, three years or more. Importantly, PIPOD universes are generated monthly and are available just a few days after month’s end. Traditional peer groups are generally offered quarterly and are available 4-6 weeks after quarter’s end.
3. How often is it updated? Is once a month really timely enough?
PIPODs are updated monthly. It's good practice to evaluate performance over longer periods of time. Accordingly, monthly updates seem sensible.
4. How will it help in manager searches -- or won't it?
PIPODs won't help in manager searches when there are plenty of candidates: the top performers against PIPODs will also be the top performers against a reasonably sized managed universe. However, PIPODs can be of great help in a search for a specialized manager with few candidates. An example would be a manager specializing in small value companies. PIPODs can tell you whether the few managers who offer the investment service have succeeded or failed in their stated area of expertise.
5. How do the results compare with manager databases over the short run? Over 3-5 years?
As mentioned in question 2, PIPODs are somewhat similar to good (large, unbiased, etc.) manager databases in the short run. However, survivor bias generally causes managed universes to be overstated for longer periods, such as 3-5 years, so they become different from PIPODs.
6. Why would I want to compare manager results to the lower PIPOD medians?
The absence of survivor bias causes PIPOD medians to be lower over long time periods, but the removal of other biases frequently offsets this effect. Evaluators who use custom benchmarks know these offsetting effects well. A manager can appear to be successful against a misspecified index or universe, when an appropriate benchmark reveals failure.
7. How can I use it for attribution analysis?
The middle, or median, of a PIPOD universe is the manager's custom benchmark – it captures the manager's style and diversification. The difference between actual return and this median is the value added or subtracted by security selection and style rotation. The ranking within a PIPOD universe is the significance of this value added or subtracted.
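The attribution arithmetic described above is simple enough to sketch directly. In this hypothetical helper, the universe returns would come from a PIPOD file; `attribution` is an illustrative name, not part of the delivered software.

```python
import statistics

def attribution(universe_returns, manager_return):
    """Split a manager's result into (1) value added versus the custom
    benchmark, i.e. the PIPOD universe median, and (2) the significance
    of that value added, i.e. the percentile rank within the universe."""
    median = statistics.median(universe_returns)
    value_added = manager_return - median
    rank = sum(r < manager_return for r in universe_returns) / len(universe_returns)
    return value_added, rank
```

For example, a manager returning 0.35 against a universe of {0.0, 0.1, 0.2, 0.3, 0.4} added 0.15 over the median of 0.2, ranking ahead of 80% of the opportunity set.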
8. Let’s say I want a complete set of PIPOD data -- what exactly am I getting, and in what format am I getting it?
PIPOD universes are delivered as text files, with a .PRN extension. These can be opened in most programs – Notepad, Word, Excel, etc. We also provide a program that accesses the files and graphs your fund’s returns. See the next question.
9. Do I have to do any programming to get PIPOD analyses?
If you like, you could write a short program that opens the PIPOD data described in question 8, but we provide free software to help you.
When you install the software, it will place an icon on your desktop. Double-click the icon, and the left-hand side of the exhibit above will appear. The graphic area on the right will be grayed out. Go to the upper left and click FILE –> OPEN to select the time period you’re interested in. The file naming convention is PODyymm.PRN, where yy is the 2-digit year and mm is the 2-digit month. Click on the desired file and then click OPEN, and the graph on the right will appear.
The rest is straightforward. Enter the name of your manager in the FUND NAME box at the top left. Drag the slider just below Fund Name to the number of securities usually held by the manager. Enter the manager’s returns – you don’t have to enter all of them, just the ones you know or are interested in. Click the REFRESH CHART button. This will plot the manager’s returns as blue diamonds on the floating bars. Last, but not least, underneath BENCHMARK INDEX there’s a drop-down box that you should use to scroll to the right benchmark for this manager. Voilà, scientific rankings.
10. Is this data available for calendar year comparisons? We have downloaded the demo and noticed only trailing year information.
The answer is yes, but the free software you’ve downloaded won’t open it. This software was intended more as a proof of concept, but we are talking to someone about upgrading it. The professional version we call HedgePODs does have annual universes and much more.
11. Regarding the methodology, are you taking the most recently available underlying holdings for those indexes and applying the methodology to past time frames? In other words, the Russell 1000 Growth has most likely changed composition over the last 10 years. Are you calculating a trailing 10-year return based on current Russell 1000 Growth holdings?
Absolutely not. PIPOD portfolios trade, just like the real thing. Please see our PIPOD Web page for recently released articles.
12. This may be an overly simplistic question, but are those indexes with fewer holdings harder to "beat" using this methodology? That is, the number of possible portfolios/return sets for the Russell 1000 seems smaller than for the Russell 2000, which would make those ranges narrower. Or is the number of return sets the same for each PIPOD, regardless of the number of holdings within that particular index?
We generate 10,000 portfolios for each mandate, regardless of the number of names in the index. The ranges are primarily a function of the underlying dispersion (risk) in the asset class.
13. Also, in our brief tests of PIPOD data against current Mobius universe data, it appears that the differences occur most drastically at the 5th and 95th percentiles: the PIPOD universe numbers outperform the Mobius universe numbers at the 5th percentile and underperform them at the 95th, creating a broader range of returns with a higher top end. Is there an easy way to explain this occurrence?
We are working on a paper called Replicating Peer Groups for Fun & Profit: Everyone Loves a Clone. The Mobius database is just one universe set we're replicating; TUCS and Morningstar are the others. We set two parameters to accomplish replication: survivor bias and number of names.
The first adjustment raises the POD median. The second controls the spread (the 5th and 95th percentiles) – more names narrow the spread and fewer names widen it. We need more names for small-company mandates than for large ones to match Mobius, which conforms to what we know about the typical number of holdings. You can control the spread yourself by tinkering with the number-of-names slider in the sample program.
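The names-versus-spread relationship is easy to demonstrate: random portfolios holding more names produce a tighter 5th-95th percentile range. A minimal sketch, again assuming equal weighting over a single period (`spread` is a hypothetical name, not part of the PIPOD software):

```python
import random
import statistics

def spread(security_returns, n_held, n_portfolios=10_000, seed=0):
    """Width of the 5th-95th percentile range across random
    equal-weighted portfolios of n_held names."""
    rng = random.Random(seed)
    sims = sorted(
        statistics.mean(rng.sample(security_returns, n_held))
        for _ in range(n_portfolios)
    )
    return sims[int(0.95 * n_portfolios)] - sims[int(0.05 * n_portfolios)]
```

Running this with, say, 10 names versus 50 names drawn from the same dispersed return set shows the 50-name range coming in noticeably narrower, which is the diversification effect the slider exposes.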
14. Generally speaking, it appeared that the PIPOD universes were harder to beat for the reasons in question 13. We'd be interested in learning more about the methodology; however, we would have a hard time transitioning if it were a harder peer group to beat (even though it may be a cleaner, quicker approach). Any comments on PIPODs as they compare to Mobius universe data as a whole, and why we may be finding results less favorable than anticipated?
Here’s the current (July 2006) situation, which will change: value has outperformed growth. Consider a line with value on the far left and growth on the far right, with most managers somewhere in between. Most managers in value peer groups will underperform their value benchmarks for recent periods because most funds in value peer groups have some growth exposure. Similarly, most funds in growth peer groups should be outperforming their benchmark. In other words, you should see value benchmarks ranking above median in Mobius value universes and growth benchmarks ranking below median. Have the growth guys gotten smarter and the value managers dumber? I think not. PIPODs don't solve this problem, but custom PODs do.
PODs were originally designed to be customized, as blends of styles or derived from "normal portfolios," but no one wanted custom universes. Someday I hope to give people what they need rather than what they seem to want.