9/255 George Street,
Sydney NSW 2000
02 9221 4066
JANA’s Matthew Griffith argues for a risk-adjusted metric to be included in the YFYS performance test, rather than a mere focus on implementation
Asset consultant JANA would like to see the Australian Prudential Regulation Authority (APRA) introduce a risk-adjusted measure to the Your Future, Your Super (YFYS) performance test to provide a better view on how superannuation portfolios perform.
In its current form, the test is merely a measure of implementation rather than an indication of good design, Matthew Griffith, Principal Consultant at JANA, says in an interview with [i3] Insights.
“We think that if there is a risk-adjusted metric included as part of the fabric of the test, that will alleviate the need to have an ever-growing number of benchmarks,” Griffith says. “Investing is multifaceted and the inclusion of a risk-adjusted metric will round out the test to provide a better lens on what the performance has actually been like.”
Griffith says introducing such a metric wouldn’t be too hard and wouldn’t require a wholesale change to the test.
“A risk-adjusted metric of dividing return by realised risk is a relatively simple metric,” he says. “It is easy to calculate and so there could be an appraisal of the efficiency of the portfolio on those terms relative to the benchmark.
“That takes away the need to cut up the portfolio into an implementation test which has a large number of benchmarks,” he says.
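The metric Griffith describes — return divided by realised risk — can be sketched in a few lines. The sketch below is purely illustrative: the function name, the use of monthly returns, and all return figures are assumptions, not part of the test's actual specification.

```python
# Illustrative sketch of a simple risk-adjusted metric: annualised return
# divided by annualised realised volatility, computed for a portfolio and
# its benchmark. All return series below are hypothetical.

import statistics

def risk_adjusted_ratio(monthly_returns):
    """Annualised mean return divided by annualised volatility."""
    mean_annual = statistics.mean(monthly_returns) * 12
    vol_annual = statistics.stdev(monthly_returns) * (12 ** 0.5)
    return mean_annual / vol_annual

# Hypothetical monthly return series (decimal form)
portfolio = [0.012, -0.004, 0.009, 0.015, -0.010, 0.008, 0.011, 0.003]
benchmark = [0.010, -0.006, 0.010, 0.013, -0.012, 0.007, 0.012, 0.002]

ratio_p = risk_adjusted_ratio(portfolio)
ratio_b = risk_adjusted_ratio(benchmark)
# A fund could then be appraised on efficiency relative to the benchmark:
# a portfolio can lag on raw return yet still compare well if it earned
# its return with proportionally less realised risk.
print(f"portfolio: {ratio_p:.2f}, benchmark: {ratio_b:.2f}")
```

Because the comparison is made at the whole-portfolio level, it does not depend on slicing the fund into many asset-class sleeves, which is the point Griffith makes about reducing the number of benchmarks.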
Earlier this month, Treasury introduced a number of changes to the YFYS performance test after consultation with the superannuation industry. Griffith says JANA welcomed this review and commended the Government and Treasury for their approach to this most recent round of industry consultation.
A key change was the expansion of the number of benchmarks funds can use to measure their asset allocation against, resulting in a more granular approach to performance measurement. For example, Treasury added benchmarks related to emerging market equities, credit and alternative assets.
Overall, Griffith says these changes are an improvement on the previous test metrics.
“In aggregate we think it is good, but the caution of course is the ex post implementation of it,” he says. “Every time there is a new benchmark [added] it essentially applies from 1 July 2014 and it is not necessarily going to reflect what funds were thinking at the time.
“Take emerging market equities, for example. We think that is a good inclusion, but not all funds had standalone emerging market equity SAAs (strategic asset allocations). Had they known that this Test would be applied eight years ago, I can almost guarantee that there would have been a discussion by Boards and internal teams around an SAA weight for emerging markets,” he says.
Griffith also questions whether the current approach will just see more and more benchmarks being added over time, because there is no clear point at which a collection of benchmarks accurately reflects the investment universe.
“Where do you draw the line of what is in and what is out? Do you have a standalone ILB (inflation-linked bond), rather than an aggregated ILB in the Australian index, for example? Do you have standalone country equity benchmarks?”
“But to be clear, I think these benchmarks in aggregate are an improvement, because they cut up the fund in smaller divisible parts. I think at the margin that reduces tracking error, or the concern of introducing things that don’t fit the benchmark very well. So that is a good thing,” he says.
An important pain point of the previous test was the way it treated alternative investments. Any hedge fund was measured against a benchmark consisting of 50 per cent equities and 50 per cent fixed income.
But as interest rates were cut dramatically in the period after the Global Financial Crisis, funds started to invest in more defensive-type alternative assets to replace their bond exposures. To have these measured against a benchmark whose risk was largely determined by its equity exposure was a clear mismatch.
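The mismatch described above can be made concrete with a small sketch. The numbers below are hypothetical, chosen only to show how a 50/50 equity/bond blend imports equity risk into the benchmark for a sleeve that was built to avoid it.

```python
# Illustrative sketch: a defensive alternatives sleeve measured against a
# 50/50 equity/bond blended benchmark, as under the previous test design.
# All return figures are hypothetical.

def blended_benchmark(equity_returns, bond_returns, w_equity=0.5):
    """Period-by-period weighted blend of two index return series."""
    return [w_equity * e + (1 - w_equity) * b
            for e, b in zip(equity_returns, bond_returns)]

# Hypothetical annual returns: an equity drawdown year, then a recovery year
equities = [-0.15, 0.20]
bonds = [0.03, 0.01]
defensive_alts = [0.04, 0.05]   # steady, bond-replacement-style returns

benchmark = blended_benchmark(equities, bonds)
excess = [a - b for a, b in zip(defensive_alts, benchmark)]
# The sleeve's measured excess return swings with the equity market,
# even though the assets were chosen precisely to avoid that exposure.
print([round(x, 3) for x in excess])
```

Under this construction the sleeve looks strong in the drawdown year and weak in the recovery year, purely because the benchmark's risk is dominated by its equity leg.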
Following the review, Treasury has now added two alternative asset benchmarks: a growth benchmark and a defensive benchmark. Although Griffith is positive about the new additions, he also notes that the new benchmarks still do not reflect the role alternative assets play in a broad multi-asset portfolio.
“Alternatives have been improved by having three categories, not a single category, but the category still uses a blend of bonds and equities. That is not how they are thought of in the industry,” he says.
“They are typically sought after as diversifiers that are truly different from equity and bond exposures. So the linkage of the benchmark to liquid market exposures has always been a challenge.
“But we think it is probably better than where we were with a straight 50/50 benchmark,” he says.
There are also some classification issues with the new benchmarks, Griffith says. For example, Treasury has added credit-focused benchmarks, which are an improvement on the bond benchmarks of the earlier test, but the credit definition APRA uses and the proposed benchmarks do not necessarily reflect market practice in all cases.
“Also the focus on a high yield benchmark is an improvement, but it leaves out floating rate debt. That is a very substantial part of the market these days, which is not in the proposed index.
“So really you are asking a client to say: ‘Do you want to take an off-benchmark floating rate exposure versus the high yield index?’”
Treasury has also expanded the time period over which the test measures performance from eight to 10 years. Griffith says this better reflects common practice in the design of MySuper options.
“We think it is a good thing overall. It aligns with disclosures for investment horizons for default options across the industry,” he says.
Griffith also says the longer timeframe better reflects the long-term nature of pension fund investing. But asked whether the timeframe should be further extended to 15 or 20 years, Griffith says this would be impractical.
“A test has to be meaningful for the period of time where a team or a board have control over the portfolio. If it is too long, then the test becomes somebody else’s problem later on.
“You want the test period to be meaningful, but also within the time horizon of an investment cycle, which depending on who you read is somewhere in the range of seven to 10 years,” he says.
18/140 William Street,
Melbourne VIC 3000
03 9602 5400