Metrics · Reporting · Risk Management

The COI Compliance Metrics That Actually Matter

Most COI dashboards measure the wrong things. Here are the metrics that actually predict risk reduction and program health.

The RiskStack Team

Walk into any risk management team using a major COI tracking platform and you'll see roughly the same dashboard: a big "compliance percentage" number at the top, a list of expiring certificates, and a list of non-compliant vendors. The numbers move up and down with activity. Leadership glances at them quarterly. Nobody really uses them to make decisions.

That's because most of these metrics are measuring the wrong thing. A 94% compliance rate sounds great until you realize it's measuring document collection, not actual risk reduction. The real metrics — the ones that predict whether your program is working — are usually buried below the dashboard headlines or aren't tracked at all.

Here's what we think actually matters, why, and how to track it.

The metrics most platforms surface (and why they're shallow)

Total compliance percentage. This is the headline number. It usually means "percentage of vendors with current certificates on file." That's a document-collection metric, not a risk metric. A vendor can be 100% compliant on paper while actually having forged certificates, missing endorsements, inadequate limits, or coverage that doesn't apply to the work being done. The number is easy to game and easy to misinterpret.

Number of certificates expiring soon. This is operational hygiene, not a risk indicator. Useful for the person doing renewal work, not very useful for understanding program health.

Vendor count. "We track 1,247 vendors" doesn't tell you anything about the quality of the tracking.

These metrics aren't useless — they're just thin. Reporting them as if they're a complete picture of program health is misleading.

The metrics that actually matter

In our research, the metrics that correlate with program quality (reduced claim exposure, faster issue resolution, better vendor relationships) are different from the headline numbers. A few that we'd argue belong on every dashboard:

1. Endorsement compliance percentage, separated from certificate compliance. As we've written elsewhere, the certificate is shallow; the endorsements are where coverage actually lives. Tracking "vendors with current certificates" as one number and "vendors with verified endorsements matching contract requirements" as a separate number reveals the gap between paper compliance and actual coverage. In most programs, this gap is uncomfortably large.

2. Time-to-resolution for compliance gaps. When a vendor is flagged non-compliant, how long does it take to resolve? Average days from flag to resolution is one of the strongest indicators of program health. Programs where gaps linger 60+ days are programs where compliance is theoretical. Programs where gaps resolve within 14 days are working.

3. Verification depth distribution. Of your active vendors, what percentage have coverage verified at the carrier or broker level versus accepted at the certificate level? A program where 80% of vendors have only PDF-based verification is structurally exposed to fraud and stale data. A program where 60% are broker-verified and 20% are carrier-verified is meaningfully more protected.

4. Coverage-to-requirement ratio. Are vendors meeting limits requirements with margin, or barely? If your contracts require $1M GL and most vendors carry exactly $1M, a routine claim could exhaust their policy and leave you exposed. Tracking the ratio between actual limits and required limits — and watching it over time — surfaces this.

5. Mid-term cancellation detection rate. When a vendor's policy gets cancelled mid-term, how do you find out? If the answer is "when the certificate expires and we ask for a new one six months later," you have a six-month detection lag. Tracking how often you catch mid-term cancellations (and how quickly) is a leading indicator of fraud and lapse exposure.

6. Vendor onboarding cycle time. From new vendor signed to fully compliant: how long? This is a vendor experience metric. Programs where onboarding takes 30+ days create friction with procurement and operations; programs where it takes 5-7 days don't. The difference is usually about platform vendor flows and internal process design, not contract complexity.

7. Audit-readiness score. Could you produce documentation for any vendor's compliance status, with supporting endorsement documents, in under 10 minutes if asked? Mock-test this quarterly. Programs that pass this test are audit-ready; programs that don't aren't, regardless of what the dashboard says.

8. Claim-relevant compliance rate. Of the vendors with the highest claim potential — high-risk work, high-value contracts, sensitive operations — what percentage are fully compliant with deeply verified coverage? Aggregate compliance treats every vendor equally; risk-weighted compliance recognizes that not every vendor matters equally.

How to actually track these

Most COI platforms don't surface these metrics natively. You'll need to build them, either through the platform's reporting layer or by exporting data. A few practical approaches:

Endorsement compliance as a separate field. If your platform tracks endorsement documents, you can usually filter for "vendors with all required endorsements verified" as a saved view. Compare that count to the headline compliance number to see the gap.
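
As a minimal sketch, assuming a vendor export with per-vendor flags for certificate currency and endorsement verification (the field names and sample data here are illustrative, not any platform's actual schema):

```python
# Minimal sketch: headline certificate compliance vs. endorsement compliance
# from a vendor export. Field names are assumptions; map them to whatever
# your platform's export actually provides.

vendors = [
    {"name": "Acme Roofing",  "cert_current": True,  "endorsements_verified": True},
    {"name": "Delta Hauling", "cert_current": True,  "endorsements_verified": False},
    {"name": "Omni Electric", "cert_current": False, "endorsements_verified": False},
]

total = len(vendors)
cert_rate = sum(v["cert_current"] for v in vendors) / total
endorsement_rate = sum(
    v["cert_current"] and v["endorsements_verified"] for v in vendors
) / total

print(f"Certificate compliance: {cert_rate:.0%}")
print(f"Endorsement compliance: {endorsement_rate:.0%}")
print(f"Paper-vs-coverage gap:  {cert_rate - endorsement_rate:.0%}")
```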

Time-to-resolution from issue logs. Every platform tracks when a compliance issue is opened and when it's closed. Pulling those timestamps and computing average duration is straightforward. If your platform doesn't expose this data, that's a sign the platform is workflow-light.
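
A minimal sketch of that computation, assuming an issue-log export named compliance_issues.csv with opened_at and closed_at ISO-date columns (the file name and column names are assumptions about your platform's export):

```python
# Minimal sketch: mean and median days from flag to resolution, computed
# from exported issue-log timestamps. Unresolved issues (empty closed_at)
# are excluded.

import csv
from datetime import date
from statistics import mean, median

with open("compliance_issues.csv", newline="") as f:
    resolved = [row for row in csv.DictReader(f) if row["closed_at"]]

durations = [
    (date.fromisoformat(r["closed_at"]) - date.fromisoformat(r["opened_at"])).days
    for r in resolved
]

print(f"Resolved issues: {len(durations)}")
print(f"Mean time-to-resolution:   {mean(durations):.1f} days")
print(f"Median time-to-resolution: {median(durations):.1f} days")
```

The median is worth reporting alongside the mean: a handful of abandoned, long-open issues can drag the average far above what's typical.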

Verification depth as a tag. Some platforms tag the verification source per certificate (PDF-only, broker-verified, carrier-verified). If yours doesn't, you can usually tell from metadata — broker-verified records typically have additional fields populated. TrustLayer's approach of carrier-direct integrations creates a verification depth distinction that's worth tracking.
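
Once a verification-source tag exists, the distribution is a one-pass count. A minimal sketch, assuming a per-vendor verification field with pdf/broker/carrier values (field name and data are illustrative):

```python
# Minimal sketch: distribution of verification depth across active vendors,
# using the three-level distinction from the text (PDF-only, broker-verified,
# carrier-verified).

from collections import Counter

vendors = [
    {"name": "Acme Roofing",     "verification": "broker"},
    {"name": "Delta Hauling",    "verification": "pdf"},
    {"name": "Omni Electric",    "verification": "carrier"},
    {"name": "Pine Valley HVAC", "verification": "pdf"},
]

depth = Counter(v["verification"] for v in vendors)
total = len(vendors)
for level in ("carrier", "broker", "pdf"):
    print(f"{level:>8}: {depth[level] / total:.0%}")
```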

Coverage ratio as a calculated field. Actual limit divided by required limit, computed per vendor, aggregated as a portfolio metric. Most platforms support custom field calculations of this type.
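
A minimal sketch of that calculation on GL limits, using illustrative field names and dollar amounts (a ratio of 1.0 means the vendor carries exactly the contractual minimum):

```python
# Minimal sketch: coverage-to-requirement ratio per vendor, plus a portfolio
# summary flagging vendors with no margin above the required limit.

from statistics import median

vendors = [
    {"name": "Acme Roofing",  "gl_limit": 2_000_000, "gl_required": 1_000_000},
    {"name": "Delta Hauling", "gl_limit": 1_000_000, "gl_required": 1_000_000},
]

ratios = {v["name"]: v["gl_limit"] / v["gl_required"] for v in vendors}
at_minimum = [name for name, r in ratios.items() if r <= 1.0]

print(f"Median coverage-to-requirement ratio: {median(ratios.values()):.2f}")
print(f"Vendors carrying exactly the minimum: {at_minimum}")
```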

Risk-weighted compliance from your own segmentation. Tag vendors by risk tier (high, medium, low) based on contract value, work type, or your own criteria. Calculate compliance rate per tier. The aggregate hides what the segmented view reveals.
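
A minimal sketch, assuming you've exported each vendor with your own tier tag and an overall compliance flag (both names hypothetical):

```python
# Minimal sketch: compliance rate per risk tier. The aggregate rate hides
# what the segmented view reveals.

from collections import defaultdict

vendors = [
    {"name": "Acme Roofing",     "tier": "high",   "compliant": True},
    {"name": "Delta Hauling",    "tier": "high",   "compliant": False},
    {"name": "Omni Electric",    "tier": "medium", "compliant": True},
    {"name": "Pine Valley HVAC", "tier": "low",    "compliant": True},
]

by_tier = defaultdict(list)
for v in vendors:
    by_tier[v["tier"]].append(v["compliant"])

for tier in ("high", "medium", "low"):
    flags = by_tier[tier]
    rate = sum(flags) / len(flags) if flags else 0.0
    print(f"{tier:>6}-risk compliance: {rate:.0%} ({len(flags)} vendors)")
```

In this toy data the aggregate rate is 75%, but the high-risk tier — the one that actually predicts claim exposure — sits at 50%.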

What the metrics tell you about platforms

One useful side effect of tracking these metrics: they tell you whether your platform is actually working.

A platform that produces high headline compliance but low endorsement compliance is collecting documents without verifying coverage. A platform with long time-to-resolution is workflow-heavy without being effective. A platform with low verification depth (everything is PDF-based) isn't doing the verification work — it's a filing cabinet.

When evaluating platforms, ask whether these metrics are surfaced natively or whether you'd need to build them. The platforms that take risk management seriously surface them; the platforms that are workflow tools surface vanity metrics.

A note on benchmarking

Industry benchmarks for these metrics are scarce. Most published benchmarks come from platform vendors who have an interest in making their customer base look good. Take them with skepticism. Within your own organization, year-over-year comparison is more useful than industry comparison. If your endorsement compliance was 62% last year and is 78% this year, that's progress regardless of where the industry sits.

For directional reference, here's what we'd consider strong performance based on our research conversations:

  • Total certificate compliance: 95%+ for mature programs.
  • Endorsement compliance: 80%+ is strong, 60-80% is typical, below 60% is concerning.
  • Time-to-resolution: 14 days or less for active issues.
  • Verification depth: 50%+ of vendors with broker or carrier verification (not just PDF) is strong.
  • Mid-term cancellation detection: catching 50%+ of cancellations within 30 days is strong; most programs don't measure this and probably catch fewer than that.
  • Onboarding cycle time: under 14 days for most vendors is strong.
  • Audit readiness: should always pass.

The bigger picture

A risk management program is only as good as the metrics it manages to. If you're managing to vanity metrics, the program will be optimized for those metrics — not for risk reduction. Switching to deeper metrics changes what gets attention, what gets resourced, and what improves over time.

The shift is uncomfortable at first. Going from "we're 94% compliant" to "we're 94% compliant on certificates and 67% compliant on endorsements with median resolution time of 38 days" is not a more flattering story. But it's a more accurate one, and accurate stories drive better programs.

Compare how platforms support deeper metrics in our research.

Find your COI tracker in three minutes.

Eight questions, personalized shortlist. No sales calls.

Start My Comparison