We need new pension metrics.
In the 1980s, baseball writer and analyst Bill James recognized that the traditional baseball statistics, like Batting Average, RBI, and Runs Scored, were so severely limited as to be wildly misleading. Batting average didn’t distinguish between home runs and singles, and didn’t count walks at all. RBI depended heavily on batters in front of you getting on base. Likewise Runs Scored depended heavily on batters behind you driving you in. Park size influenced ERA. Slow fielders got to fewer batted balls, so paradoxically recorded fewer errors. And so on.
By writing and thinking and analyzing what data was available, James created entirely new stats, intended to isolate individual performance, or tease out surprising results about team performance. So now we have the “Slash Line,” of Batting Average, On-Base Percentage, and Slugging. We have Runs Created, Park-independent ERA, and Range Factor. James created a whole new field of baseball analysis, and now there’s not a major league team without an analytics department using them to evaluate team and individual performance.
In the last few years, the country has had a growing realization that its public pensions are in trouble. Communities, struggling to meet unrealistic commitments that, by and large, their members didn’t make, have undertaken a series of reforms to existing defined benefit programs, and have even begun converting those plans to 401(k)-style defined contribution and cash balance plans.
But in doing so, policymakers are still guided by a small number of traditional metrics that give only a vague sense of a plan’s health, and little if any guidance on potential fixes.
Right now, policymakers focus primarily on Funded Status and Amortization Period. The first tells you how much money a fund has on hand to cover promises made, in present-day dollars. The second tells you how long it would take to reach fully funded status. But each has severe limitations. A plan’s funded status is a snapshot of where it is right now, but does nothing to capture the trend, or the risk that things will get worse. The amortization period helps describe how far off-track a plan is, but quickly becomes extremely sensitive to small changes in plan dynamics.
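That sensitivity is easy to demonstrate. Here’s a minimal sketch with made-up plan figures, using a simplified level-dollar amortization formula (real plans layer multiple amortization bases and typically use level-percent-of-payroll methods, so treat this as an illustration, not a valuation):

```python
import math

def funded_ratio(assets, liability):
    """Funded status: assets on hand / present value of promises made."""
    return assets / liability

def amortization_years(ual, payment, rate):
    """Years to pay off an unfunded liability (UAL) with a fixed annual
    payment, level-dollar amortization at the given discount rate.
    Returns inf if the payment doesn't even cover interest on the UAL."""
    interest = ual * rate
    if payment <= interest:
        return math.inf
    return -math.log(1 - ual * rate / payment) / math.log(1 + rate)

# Hypothetical plan: $8B in assets, $10B liability, 7% assumed return.
assets, liability, rate = 8e9, 10e9, 0.07
ual = liability - assets  # $2B unfunded

print(funded_ratio(assets, liability))  # 0.8 -> "80% funded"

# Near the interest-only threshold (0.07 * $2B = $140M/yr), small
# changes in the annual payment swing the amortization period wildly:
for payment in (150e6, 145e6, 141e6):
    print(round(amortization_years(ual, payment, rate), 1))
# prints 40.0, then 49.8, then 73.1 -- a 6% payment cut nearly
# doubles the amortization period.
```

The same fragility applies to the discount rate: a plan paying barely more than interest on its unfunded liability can see its amortization period jump from decades to “never” on a small assumption change.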
Much like baseball in 1980, public pension analysis is ripe for new statistics. Over the last few years, plans’ financial reports have become more detailed and include more historical information, so there’s much more information for analysts to work with. This shouldn’t just be playing games with numbers; new statistics should be designed to help policymakers, plan managers, and citizens understand how bad the problem is, and where the risks to their plans lie.
Here are the criteria I propose:
- Each statistic should be a single number
It shouldn’t have error bars, or only be understood in combination with other numbers. A full analysis will require more than one number, but Slugging Average means something by itself; it doesn’t need other numbers to make sense.
- It should have a clear meaning and definition
People should know what they’re looking at, and understand exactly what the number is meant to describe.
- We should be able to calculate it, or at least estimate it, from publicly available information
Reproducibility is key. We shouldn’t have to rely on pensions to make these calculations for us, or on legislatures to commission studies, and we should be able to ding plans that start removing useful data from their reports.
At the same time, it’s important to keep in mind what we’re not trying to do:
- There is no Holy Grail here
We’re not looking for a single Public Pension Score that captures everything. There will be numbers that are more descriptive, statistics that encapsulate more information, but there’s no reason to create some artificially weighted “Pension Health Score” that claims to tell you everything while telling you nothing at all.
- Closed Definitions
There can be debate on the best way to calculate these numbers. There are very strong competing opinions about the proper discount rate to use, for instance, or the proper tax base, and so on. Those are legitimate debates to have. We’re striving for clarity, not absolutism.
- Fairness Metrics
Recent studies such as one by the Urban Institute have also looked at pension fairness, and how well a pension plan performs its functions. That may help in making the political case, but it’s outside the scope of what I’m proposing here.
In 2014, the Colorado legislature mandated a sensitivity study of PERA. As part of that study, the consulting firm developed a simpler “signal-light” structure that included calculations of a plan’s risk of going broke over a period of time, based on variations in investment returns. That’s an example of a great, simple number that encapsulates a great deal of information and helps policymakers decide the risk to their communities.
It’s a good start, but I’m pretty sure we can do better. For one thing, while the methodology behind that calculation may be fairly simple, it’s almost impossible to recreate it without developing a complex actuarial model of the pension plan. That might be an interesting and useful exercise, but it breaks requirement #3.
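For comparison, a crude version of a risk-of-ruin number can be sketched in a few lines, if you accept fixed net cash flows and independent, normally distributed annual returns in place of a real actuarial model. Every figure below is hypothetical; a real projection would model benefit payments, payroll growth, and contribution policy year by year:

```python
import random

def ruin_probability(assets, annual_outflow, annual_inflow,
                     mean_return=0.07, vol=0.12,
                     years=30, trials=10_000, seed=1):
    """Share of simulated paths on which the fund runs out of money
    within the horizon, assuming normally distributed annual returns
    and fixed cash flows -- a crude stand-in for a full actuarial model."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        balance = assets
        for _ in range(years):
            balance *= 1 + rng.gauss(mean_return, vol)
            balance += annual_inflow - annual_outflow
            if balance <= 0:
                ruined += 1
                break
    return ruined / trials

# Hypothetical plan: $8B in assets, paying out $600M/yr in benefits,
# taking in $450M/yr in contributions.
print(ruin_probability(8e9, 600e6, 450e6))
```

Even this toy version shows why the signal-light number is hard to reproduce from the outside: the answer depends entirely on cash-flow projections that live inside the plan’s actuarial model.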
Just sitting down and brainstorming, I came up with a list of a few metrics that might be useful, as a starting point.
- Projected Inflow vs. Outflow using projected rates of return, based on historic rates of return
- Liability vs. State (or Jurisdiction) GDP
- Unfunded Liability vs. Jurisdiction GDP
- Liability vs. Tax Rates
- Liability vs. Overall Jurisdictional Budget
- Required rate of return to lower the amortization period
- Effects of increased risk on likelihood of ruin
Real analysis would whittle down from a list of 20 or so, but the purpose here should be obvious. How much flexibility is there in the jurisdiction’s finances to deal with its problem? Can it cover current costs? Can it raise taxes to cover them? How big a bite out of actual services is the pension contribution taking?
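Several of these are just ratios of figures already published in plans’ financial reports and jurisdictions’ budget documents, which is what makes them reproducible. A sketch with made-up numbers:

```python
def unfunded_liability_vs_gdp(liability, assets, gdp):
    """Unfunded liability as a share of jurisdiction GDP: how big the
    hole is relative to the economy that has to fill it."""
    return (liability - assets) / gdp

def liability_vs_budget(liability, annual_budget):
    """Total pension liability measured in years of the jurisdiction's
    overall budget: how big a bite the promise could take out of services."""
    return liability / annual_budget

# Hypothetical state: $10B liability, $8B in assets,
# $300B GDP, $30B annual budget.
print(unfunded_liability_vs_gdp(10e9, 8e9, 300e9))  # ~0.007 of GDP
print(liability_vs_budget(10e9, 30e9))              # ~0.33 budget-years
```

Two jurisdictions with identical funded ratios can look very different on these measures; a small hole relative to a large tax base is a manageable problem, while the same funded ratio in a shrinking economy is a crisis.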
There’s enough good information out there now that we don’t need to keep flying blind.