Fitch Ratings (Fitch) recently released a paper arguing that rating users and regulators should treat an “A-” A.M. Best (Best) Financial Strength Rating (FSR) as equivalent to a “BBB” Fitch Insurer Financial Strength (IFS) rating, and to the “BBB” ratings of the other main Credit Rating Agencies (CRAs) globally active in rating re/insurers: Moody’s and Standard & Poor’s (S&P).

Fitch’s position partly reflects its view of key differences between the rating criteria it (and, it believes, the other CRAs) uses and the criteria Best applies in four particular situations, each of which Fitch believes can result in an “A-” from Best where Fitch and the other CRAs would limit their ratings to “BBB+”/“BBB”.

Fitch then compares its data on how frequently senior unsecured bond issuers default at given rating levels with Best’s data on how frequently FSR-rated operational re/insurers become ‘impaired’, which generally relates to regulatory action. Extrapolating between the two, Fitch suggests that the overall data supports the general contention that an A.M. Best “A-” is equivalent to the other CRAs’ “BBB”.

Litmus Analysis’ (Litmus) team of former CRA analysts have reviewed Fitch’s paper in detail, along with the ‘impairment’ paper from Best referenced by Fitch. More generally, the Litmus team have decades of experience in senior positions with two of the four CRAs (Best and S&P), and for the last six years Litmus has been supporting rated companies in working with the CRAs, in particular through in-depth knowledge of their rating criteria.

A number of the issues discussed here are covered in more depth in the Litmus guide to non-life re/insurer ratings.

Litmus has discussed Fitch’s paper with both agencies and shared a draft of this report with both prior to its release. We have noted their responses and, to the extent we have agreed with them, these are reflected in this finalised report.

Litmus perspective on the subject of ratings equivalence

The views in this note reflect in part Litmus’ opinion that rating equivalence should not mean CRAs being expected to rate re/insurers (or any other sector) at the same level as each other, or to use the same criteria. Indeed, a plurality of views is exactly why it is desirable that there should be more than one CRA. Rather, equivalence relates to whether a rating symbol from one agency generally means the same thing as that symbol from another, in terms of the degree of credit risk the rated re/insurer is viewed as representing.

Thus, in this case, when Best rates a re/insurer “A-”, does the agency mean broadly the same thing as when Fitch rates an insurer “A-”?

There can be confusion when comparing Best’s financial strength ratings to their equivalents from the other CRAs, as Best has long used its own unique rating scale for FSRs. However, Best also publishes an Issuer Credit Rating (ICR) on each rated insurer, and this is the base-case rating from which it derives the FSR. The ICR scale is essentially the same as that used by S&P and Fitch.

As the ICR is Best’s view of the rating for the most senior unsecured creditor position in general, and since policyholders are considered to hold that position, the ICR represents the policyholder risk rating expressed in the more commonly used rating scale.

Happily, given the subject, Best maps its ICR of “a-” to its FSR of “A-”. (In other parts of the scale two ICR levels can converge on one FSR, as there are more gradations in the ICR scale.)

Litmus comments on Fitch’s paper and its main conclusions

1. Use of re/insurer ratings by market participants

Litmus strongly agrees with a key element of Fitch’s underlying perspective, namely that ratings users, and especially those advising clients through the use of ratings, should be well informed about rating criteria and their impact. Indeed, Litmus considers that market professionals using ratings should have a broad understanding of what ratings do and do not mean, how rating decisions are reached, the information they are typically based on and what their inherent limitations are. Litmus does not believe such knowledge is as common as would be desirable among professional users of, or commentators on, ratings in the re/insurance industry.

Litmus also agrees with Fitch’s observation of the disconnect between debt capital market usage of ratings (where debt rated “BBB-” and above is considered ‘investment grade’) and re/insurance market usage (where “A-” financial strength ratings are often seen as the ‘floor’ of acceptability). To the extent that criteria-based differences in some areas between the agencies lead to ratings falling either side of the “A-”/“BBB+” boundary, that clearly matters. However, we should note that any consequences of that are a function of how re/insurance market participants use ratings, not an issue of rating equivalence.

2. Criteria differences between the CRAs and their implications

While Litmus does not always fully concur with Fitch’s descriptions of Best’s criteria, Litmus believes Fitch’s conclusion that these can lead to “A-” ratings in the four highlighted areas is broadly correct.

Fitch might be read as implying that Best therefore inherently rates at the “A-” level in these areas. In fact, we understand that was not Fitch’s intent, and Litmus has direct advisory experience of a relevant case in which Best did not conclude the carrier was “A-” (the rating was unpublished). Since, as Fitch cites, many re/insurers choose not to publish ratings below “A-”, in practice we cannot know how many instances there are of Best not assigning “A-” ratings.

In each of the four areas Litmus understands Best’s approach to have a reasonable analytic basis, albeit one that differs notably from that of the three other main CRAs. As such, Best is simply reaching a different decision about the optimal criteria to apply, and hence this should not be viewed as an issue of ‘rating equivalence’.

Litmus notes that all four CRAs have numerous criteria differences, each of which can lead one agency to rate higher than the others in particular analytical circumstances. That said, because of the variances in the four areas highlighted by Fitch, Litmus believes that the degree of aggregate variance between Best and the other CRAs is likely to be higher than it is between the other three (but we would need details of each agency’s unpublished ratings to verify that).

3. The comparison between Fitch’s default data and Best’s impairment data

Litmus is not sure that Fitch’s comparison between the bond default data published by Fitch/S&P/Moody’s and the impairment data published by Best is sufficient to support its conclusion on ratings equivalence. Our understanding of the data sets is that they reflect fundamentally different issues which Fitch nonetheless believes should lead to only moderate differences in outcome. That might be the case – though we would intuitively believe the differences should be material. There is no proof either way.

Litmus also notes that the point at which S&P assigns its ‘R’ rating to US insurers due to regulatory intervention (which the agency treats as analogous to a ‘default’) is when a mandatory takeover is required. A.M. Best’s definition of impairment includes insurers a long way from that point (the US regulatory system is quite prescriptive and clear about the link between an insurer’s risk-adjusted regulatory capital position and the different degrees of intervention by the regulator).

Both the Fitch and Best data sets also require manipulation to make any comparison between them. There is more than one way to do that, and Fitch’s approach leads to an outcome that aligns with its conclusion. However, Best’s previously presented approach to the same exercise supports the premise of equivalence between an “A-” from Best and an “A-” from Fitch and the other CRAs.

4. The issue of ‘Ratings Shopping’

Litmus strongly agrees with Fitch’s observation that the practice by some rating users of only seeking one rating on a re/insurer can lead to the issue known as ‘ratings shopping’. This is the jargon for the situation in which rated firms select the agency whose approach is most likely to lead to the higher rating. Litmus has long suggested that having ratings from two or more CRAs is healthiest for both re/insurers and rating users alike. However, it is important to note that a CRA can have a particularly strong franchise with rating users in a given country or industry segment; in such cases rated companies may have only one rating simply because that is the agency most strongly recognised by the rating users that matter to them. Indeed, Litmus has direct experience of rated re/insurers not selecting the agency whose criteria would appear to offer a better chance of a higher rating, precisely because that agency was seen as having a weaker franchise among rating users.

Hence, while we believe two or more ratings are highly desirable, having only one rating does not in itself necessarily indicate ratings shopping.

Litmus broad recommendations to ratings users

  • Be aware of the differences in approach of the four agencies, particularly in the areas where a single agency might have a different underlying opinion.
  • Consider looking for ratings from at least two agencies on each re/insurer you use.
  • Be mindful of the different financial strength rating scale used by A.M. Best and consider referring to its ICR ratings, which use the more common international standard scale.
  • Understand the differences between ‘impairment’ and ‘default’. Also remember that an event such as going into voluntary solvent run-off constitutes neither and is not something the agencies are rating to.

Finally, we would welcome further statistical analysis from the rating agencies to assist the market in understanding the relative performance of their ratings compared to one another.

Stuart Shipperlee

Managing Director, Litmus Analysis

stuartshipperlee@litmusanalysis.com

+44 (0)20 3651 5044

 

Rowena Potter

Senior Consultant Analyst

rowenapotter@litmusanalysis.com

 

2 Comments
  1. Fitch / AM Best Ratings

    An excellent piece. I think Litmus is playing an increasingly important role in the market by reviewing and commenting (objectively) on issues of this nature.

    It is difficult to think of other sectors that have undergone as much structural and regulatory change as the insurance sector, and CRAs have tended to proactively evolve/refine their criteria in response to those changes. That makes past ratings performance in the sector less reliable (than for other sectors) as an indicator of future ratings performance – though the expectation could be that past ratings performance is conservative.

    As (or if) new CRA entrants penetrate the market, it is going to become even more important that ratings users understand differences in criteria and ratings definitions.

    • Many thanks for your kind words Paul, and very interesting thoughts about the development of rating agency criteria. Certainly we see this evolution as being positive in many areas, with far greater transparency and the development of improving ERM criteria being key benefits. We’re now looking forward to A.M. Best’s latest iteration of their new criteria and expect to comment on that in the near future.
