When is a BBB rating really the same as an A+?

For most traditional reinsurers, a credit rating from one of the main agencies remains a business requirement. And in developed markets that rating often needs to be “A-” or higher, at least if the reinsurer is going to be shown the kind of business it really wants.

This binary use of ratings (“A-” and better is OK, below “A-” is not) has never really made sense. The credit markets differentiate between investment grade (“BBB-” and higher) and non-investment grade (“junk” or “high-yield”, “BB+” or lower), but with far less absolute distinction. Some bond investors can’t hold paper carrying a non-investment-grade rating, and many might limit the amount they hold, but bond pricing across the “BBB-”/“BB+” boundary suggests at least a reasonable degree of credit market risk/reward coherence. By contrast, the reinsurance market’s treatment of the “A-”/“BBB+” boundary is out of all proportion to its real meaning.

But there is another way to think about this. That is to focus on the historical frequency with which organisations with a given rating have defaulted. While none of the most globally active agencies in reinsurance (A.M. Best, Fitch, Moody’s and S&P Global) define their ratings in terms of expected default rates, all publish extensive “default studies*” recording the history of defaults by rating level.

Default* generally means non-payment of obligations on time and in full as a consequence of financial duress.

The credit risk at any rating level is a function of the length of time the creditor will be exposed to the rated entity. Using the extract from the most recent S&P Global corporate default study shown below, we can begin to see what this implies:

[Table: extract from the S&P Global default study – cumulative corporate default rates (%) by rating level and time horizon in years]

The data shows, for example, that if history repeats itself, a cedant with a reinsurance recoverable due in 2 years from an S&P BBB rated reinsurer is taking the same credit risk** as a cedant with a recoverable due in 5 years from an S&P A+ rated reinsurer.  The observed historical risk of default across all of S&P’s corporate universe in both cases is close to 1 in 200 (0.5%).

The data also shows only a tiny historical difference between the 2-year default rate of a BBB+ (0.31%) and the 3-year default rate of an A- (0.28%).
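
As an illustration of why the time horizon matters as much as the rating symbol, here is a minimal sketch (in Python, using only the cumulative default rates quoted above as placeholder data, not the full S&P table) of the comparison being made:

```python
# Illustrative only: the cumulative default rates (%) quoted in the text above,
# keyed by (S&P rating, horizon in years). The full S&P study covers every
# rating/horizon combination; these four cells are just the ones discussed here.
cumulative_default_pct = {
    ("BBB", 2): 0.50,   # roughly 1 in 200
    ("A+", 5): 0.50,    # roughly 1 in 200
    ("BBB+", 2): 0.31,
    ("A-", 3): 0.28,
}

def compare(rating_1, years_1, rating_2, years_2):
    """Print the historical default risk of two (rating, horizon) exposures side by side."""
    p1 = cumulative_default_pct[(rating_1, years_1)]
    p2 = cumulative_default_pct[(rating_2, years_2)]
    print(f"{rating_1} over {years_1}y: {p1:.2f}%  vs  {rating_2} over {years_2}y: {p2:.2f}%")

compare("BBB", 2, "A+", 5)    # effectively the same historical default risk
compare("BBB+", 2, "A-", 3)   # a gap of only 0.03 percentage points
```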

Which surely begs the question of why the “A-”/“BBB+” boundary exists, at least in its most binary form? To ignore the length of the tail being reinsured is, frankly, to miss a large part of the credit risk management point. Most insurers don’t manage their bond portfolios that way, so why do so when selecting reinsurers?

*The default data referenced here reflects the full S&P rating default history across all types of corporates (including insurers). Unlike bond defaults, observing the point in time when re/insurers default on policyholder obligations can be challenging. A.M. Best publishes an “impairment rate” study, but this is not easy to align, definitionally, with bond defaults, in part because the resulting data depends critically on deciding at what point “impairment” took place (although the agency has done some further work to make that comparison to bond default data easier). S&P also publishes default data for insurance company ratings alone, but the sample size is inherently small and the agency only does so at the “rating category” level. Accordingly, and since S&P seeks to have any given level of rating indicate the same opinion of credit risk across sectors, the full default data is a useful, if imperfect, guide.

**In terms of the historical likelihood of default. In practice a full consideration of credit risk also includes the percentage amount expected to be recovered and the timing of that recovery.
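
For those who want to see how those elements fit together, a minimal sketch of a simple expected-loss view follows; the recovery rate, recovery delay and discount rate used are purely illustrative assumptions, not figures from any rating agency study.

```python
def expected_loss(default_prob, recovery_rate, years_to_recovery=0.0, discount_rate=0.0):
    """Simple expected-loss view of credit risk: the probability of default multiplied
    by the share of the exposure not recovered, with any recovery discounted for the
    delay in receiving it. All inputs are illustrative assumptions, not study figures."""
    pv_recovery = recovery_rate * (1.0 + discount_rate) ** -years_to_recovery
    loss_given_default = 1.0 - pv_recovery
    return default_prob * loss_given_default

# e.g. a 0.5% historical default rate, an assumed 50% recovery received two years
# after default, discounted at 3% per annum:
print(f"Expected loss: {expected_loss(0.005, 0.50, 2, 0.03):.4%}")
```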

Stuart Shipperlee
Head of Analysis
Litmus Analysis Ltd
