PMGuru
Revenue Operations · 8 min read · March 31, 2026

Customer Health Scoring That Predicts Churn

A health score built on usage, support, and commercial signals predicts churn 60-90 days before cancellation. Here's how to build one.

Key Takeaways

  • A well-built health score predicts churn 60-90 days before cancellation. I've validated this across seven engagements.
  • Three signal categories: product usage (login frequency, feature adoption), support (ticket volume, sentiment), and commercial (payment behavior, expansion signals).
  • Weight product usage at 50%, support at 30%, commercial at 20%. Recalibrate quarterly against actual churn data.
  • Teams that act on health scores within 14 days of a red flag save 25-35% of at-risk accounts.

A well-built customer health score predicts churn 60-90 days before cancellation. I've validated this across seven engagements at companies doing $10M-$100M in revenue. The score combines three signal categories: product usage, support interactions, and commercial behavior. Most companies either don't have a health score or have one built on vanity metrics that doesn't correlate with actual churn. The fix takes 4-6 weeks to build correctly and starts paying back immediately in saved accounts.

What Is a Customer Health Score?

A customer health score is a composite metric that rates each account's likelihood of renewing or churning based on behavioral signals. It's the leading indicator on the retention branch of your KPI tree.

Think of it as a diagnostic for your customer base. A $48M B2B SaaS company I worked with in 2024 had 1,200 accounts and no systematic way to know which ones were at risk. The CS team relied on gut feel and renewal dates. By the time they flagged a problem, the customer had already decided to leave. The health score gave them 60+ days of lead time to intervene.

Why Do Most Health Scores Fail to Predict Churn?

Most health scores fail because they measure the wrong signals or weight them incorrectly. I've inherited health scores at four companies, and three of them had the same problem: they tracked NPS and login count, gave both equal weight, and called it done.

NPS is a lagging indicator. A customer can give you a 9 in Q2 and churn in Q3 because their internal champion left. Login count without feature depth is noise. An account logging in daily but only using one basic feature is at higher risk than an account logging in weekly but running the core workflow.

The gap between a useful health score and a cosmetic one comes down to signal selection and weight calibration.

How Do You Build a Health Score That Predicts Churn?

The build takes 4-6 weeks across three phases: signal selection, weight calibration, and operationalization.

Step 1: Define your signal categories

Three categories cover 90%+ of churn prediction power. Product usage: login frequency, feature adoption depth, time-in-app, and workflow completion rates. Support: ticket volume trends, resolution satisfaction, escalation frequency. Commercial: payment behavior, contract renewal timing, expansion conversations or lack of them.

At a $35M healthcare SaaS company, I pulled 18 months of data and tested 22 individual signals. Only nine had meaningful correlation with churn. Login frequency, feature adoption breadth, support ticket escalation rate, and days since last executive contact made the final model. The rest were noise.
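That screening pass can be sketched as a correlation check of each candidate signal against a churn flag. This is a minimal illustration, not the production model: the column names, the sample data, and the 0.2 cutoff are my assumptions, and you should set the cutoff from your own backtest.

```python
import statistics

def churn_correlation(signal_values, churned):
    """Point-biserial correlation between a numeric signal and a 0/1 churn flag."""
    n = len(signal_values)
    mean_x = statistics.fmean(signal_values)
    mean_y = statistics.fmean(churned)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(signal_values, churned)) / n
    sd_x = statistics.pstdev(signal_values)
    sd_y = statistics.pstdev(churned)
    return cov / (sd_x * sd_y) if sd_x and sd_y else 0.0

def screen_signals(accounts, churned, min_abs_corr=0.2):
    """Keep only signals whose |correlation| with churn clears the cutoff."""
    kept = {}
    for name, values in accounts.items():
        r = churn_correlation(values, churned)
        if abs(r) >= min_abs_corr:
            kept[name] = round(r, 2)
    return kept
```

Run this over 12-18 months of account history and the signals that survive become the model; everything else is noise you can drop.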

Step 2: Weight the signals against actual churn data

Start with a 50/30/20 split: 50% product usage, 30% support, 20% commercial. This baseline works for most B2B SaaS companies. Then run a correlation analysis against your last 12 months of churn data.

At the $35M company, usage signals predicted 55% of churns on their own. Support signals caught another 25%, mostly accounts with repeated escalations. Commercial signals caught the remaining 20%, primarily late payments and declined renewal meetings. I've seen the weights shift to 60/20/20 at product-led companies and 40/30/30 where services drive retention.
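The composite itself is simple once the weights are calibrated. A minimal sketch, assuming each category has already been normalized to a 0-100 score (how you normalize is company-specific and not shown here):

```python
# Baseline weights from the article: 50% usage, 30% support, 20% commercial.
# Recalibrate these quarterly against your own churn data.
BASELINE_WEIGHTS = {"usage": 0.5, "support": 0.3, "commercial": 0.2}

def health_score(category_scores, weights=BASELINE_WEIGHTS):
    """Blend per-category scores (each 0-100) into one 0-100 composite."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * category_scores[c] for c in weights)
```

Swapping in 60/20/20 for a product-led company is a one-line change to the weights dict, which is the point: the structure stays fixed while the calibration moves.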

Step 3: Set thresholds and build the dashboard

Green means the account is healthy across all three categories. Yellow means one or two signals are declining. Red means the account is likely to churn within 60-90 days without intervention.

Keep it simple. A $52M fintech company I worked with tried a 100-point scoring system. The CS team couldn't explain what "a score of 67" meant to their manager. We simplified to green/yellow/red with three bullet points explaining why each account earned its color. Dashboard adoption went from 30% to 85% in two weeks.
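The green/yellow/red mapping can be as small as one function. The 70/40 cutoffs below are illustrative assumptions, not the article's numbers; set yours by backtesting against churned accounts. The "reasons" list is the three-bullet explanation the CS team sees next to each color.

```python
def color_band(score, declining_signals):
    """Map a 0-100 composite score to green/yellow/red plus short reasons.

    declining_signals: names of signals trending down for this account.
    Cutoffs (70, 40) are placeholders to be calibrated per company.
    """
    if score >= 70 and not declining_signals:
        return "green", ["healthy across usage, support, and commercial"]
    if score >= 40:
        return "yellow", [f"declining: {s}" for s in declining_signals] or ["score slipping"]
    return "red", ["likely to churn within 60-90 days without intervention"]
```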

Step 4: Build the intervention protocol

A score without action is a dashboard nobody checks. Every red account triggers a 14-day protocol. Day 1: the CSM calls the primary contact. Day 3: escalate if no response. Day 7: VP of CS or account executive joins a recovery call. Day 14: executive sponsor outreach.

Teams that act within 14 days of a red flag save 25-35% of at-risk accounts. I've measured this consistently across engagements. The accounts that churn despite intervention usually never got full value from the product, which is a different problem that feeds back to onboarding and unit economics.
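The protocol is easy to encode as data so a daily job can surface the next action for every red account. This sketch mirrors the day 1/3/7/14 steps above; the function and data shape are hypothetical.

```python
# The 14-day red-account protocol, as (due day, action) pairs.
PROTOCOL = [
    (1, "CSM calls the primary contact"),
    (3, "Escalate to CS manager if no response"),
    (7, "VP of CS or account executive joins a recovery call"),
    (14, "Executive sponsor outreach"),
]

def next_action(days_since_red):
    """Return the first protocol step not yet past due, or None if exhausted."""
    for day, action in PROTOCOL:
        if days_since_red <= day:
            return day, action
    return None  # past day 14: move the account to churn review
```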


How Do You Keep the Score Accurate Over Time?

Recalibrate quarterly. Pull every account that churned last quarter. Check what their health score was at 30, 60, and 90 days before cancellation. If the score was green 60 days out for more than 20% of churned accounts, your signals are missing something.
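That quarterly check is a one-screen backtest. A sketch, assuming you can pull each churned account's color at 30, 60, and 90 days before cancellation (the data shape here is my assumption):

```python
def missed_green_rate(churned_accounts, days_out=60):
    """Fraction of churned accounts that were still green at `days_out` days
    before cancellation. Each account: {"score_at": {30: "red", 60: "green", ...}}.
    """
    green = sum(1 for a in churned_accounts if a["score_at"].get(days_out) == "green")
    return green / len(churned_accounts)

def needs_recalibration(churned_accounts, tolerance=0.20):
    """True if more than `tolerance` of churned accounts looked healthy 60 days out."""
    return missed_green_rate(churned_accounts) > tolerance
```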

At the $48M company, the first-quarter calibration revealed that "days since last executive sponsor contact" was stronger than we'd weighted it. Accounts where the sponsor went quiet for 45+ days churned at 3x the rate of engaged accounts. We increased the commercial signal weight from 20% to 30% the next quarter.

This quarterly review is The KPI Tree Framework applied to retention. The board-level churn metric branches into health score accuracy, intervention success rate, and time-to-intervention. Each branch has clear KPI ownership. The CS leader owns intervention execution. The data team owns score calibration. The VP of Product owns the usage signals that feed the model. Without this operating cadence around the score, accuracy drifts and the team can't course-correct before accounts leave.

What Went Wrong When I First Built a Health Score?

At a $22M B2B platform in 2022, I built the health score from assumptions instead of data. I assumed login frequency was the top predictor because it's the most common signal in every health score article online. The data told a different story.

Login frequency had weak correlation with churn at this company. Their product was a workflow tool that users needed 2-3 times per month, not daily. The real predictor was workflow completion rate: accounts completing their monthly processes churned at 4% annually, while accounts with incomplete workflows churned at 28%. I'd wasted three weeks building around the wrong signal. The rebuild took another two weeks. The lesson: pull the churn correlation data first, then build. Never copy another company's signal weights.

What to Do This Week

Pull your last 12 months of churned accounts. For each one, check login frequency and feature usage in the 90 days before cancellation. Compare those patterns to your healthiest accounts over the same period. The gap between those two groups is your first signal.
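The first-pass comparison above can be done in a few lines once the data is exported. A sketch with illustrative field names; swap in whatever signal you pulled for the 90-day window:

```python
import statistics

def cohort_gap(churned, healthy, signal):
    """Mean value of one signal per cohort, plus the healthy-minus-churned gap."""
    m_churned = statistics.fmean(a[signal] for a in churned)
    m_healthy = statistics.fmean(a[signal] for a in healthy)
    return {"churned": m_churned, "healthy": m_healthy, "gap": m_healthy - m_churned}
```

A large gap on a signal is your first candidate for the model; a near-zero gap means that signal is probably noise at your company, whatever the health score articles say.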

If you don't have this data accessible, that's the problem to solve first. If you want help building the score and the intervention protocol, book a diagnostic.

Frequently Asked Questions

What signals should a customer health score include?

Three categories: product usage (login frequency, feature depth, time-in-app), support (ticket volume, resolution time, sentiment), and commercial (payment history, contract renewal timing, expansion conversations). Weight them 50/30/20 as a starting point and recalibrate quarterly against actual churn data.

How do you weight different health score signals?

Start with 50% product usage, 30% support signals, 20% commercial indicators. Recalibrate quarterly by running correlation analysis between score components and actual churn outcomes. I've seen weights shift to 60/20/20 at product-led companies and 40/30/30 at services-heavy businesses.

How far in advance can a health score predict churn?

A properly calibrated health score predicts churn 60-90 days before cancellation. I've validated this across seven engagements. The key is including leading indicators like usage decline and support sentiment, not lagging indicators like renewal date.

What should the CS team do when a health score turns red?

Launch a 14-day intervention protocol. Day 1: the assigned CSM calls the primary contact. Day 3: escalate to the CS manager if no response. Day 7: VP of CS or account executive joins a recovery call. Day 14: executive sponsor outreach if the account is still at risk. Teams that act within 14 days save 25-35% of at-risk accounts.

How often should you recalibrate a customer health score?

Quarterly, against actual churn data from the prior quarter. Pull every account that churned, check what their health score was 30, 60, and 90 days before cancellation, and identify missed signals. Adjust weights and thresholds based on what the data shows.



Dhaval Shah

Fractional Leader

26+ years in product and revenue operations. $50M+ revenue influenced across healthcare, fintech, retail, and telecom.

Connect on LinkedIn

Want help executing this?

If you want clarity on your situation, book a 30-minute diagnostic. I work as a fractional operator inside PE-backed and founder-led companies doing $10M-$100M in revenue, and the diagnostic identifies your biggest growth gap.

Start with proof in case studies, then review engagement models.

Book a diagnostic