Revenue Attribution: The Metric Most Companies Get Wrong
First-touch, last-touch, multi-touch. They are all incomplete. Here is the attribution model that actually tells you where your revenue comes from.
Key Takeaways
- First-touch and last-touch attribution overcount by 30-50%, giving credit to the wrong channels.
- Multi-touch attribution is better but still relies on assumptions about how much credit each touchpoint deserves.
- Incremental attribution, which measures lift over a control group, is the only model that captures true causation.
- Start by auditing your current model: pull 100 closed deals and check if the attributed channel matches what the buyer actually says influenced them.
Your CMO shows the board a slide: "Paid search drove $3.2M in revenue last quarter." Your VP of Sales shows a different slide: "The sales team closed $4.1M, mostly from referrals." Both are using the same CRM data. Neither number is wrong. Neither is right.
This is the attribution problem, and it costs companies millions in misallocated spend every year.
Why Every Standard Model Fails
First-Touch Attribution
Gives 100% credit to whatever brought the buyer in first. Clicked a Google ad six months ago? Google gets credit for the $50K deal that closed after 12 touchpoints.
The problem: it ignores everything that happened between awareness and purchase. The demo, the case study, the three emails, the sales calls. All invisible.
Last-Touch Attribution
Gives 100% credit to the final interaction before purchase. The buyer clicked an email link and signed the contract? Email gets full credit.
The problem: it ignores everything that created the demand in the first place. The brand awareness campaign, the content that educated the buyer, the webinar that built trust. All invisible.
Multi-Touch Attribution
Spreads credit across touchpoints. Linear gives equal credit to all. Time-decay gives more to recent touches. Position-based gives 40% to first and last, 20% to the middle.
Better, but still flawed. The weighting is arbitrary. Why does a position-based model assume the first touch deserves 40%? Because someone decided it sounded reasonable, not because the data proved it.
The Model That Works: Incremental Attribution
Incremental attribution measures lift. Instead of asking "which channel touched the customer?" it asks "what would have happened if we had not spent money on this channel?"
The mechanics: you create holdout groups. For every channel, you withhold spend from a random segment and compare their conversion rate to the segment that saw the ads. The difference is the incremental lift. That is your true attribution.
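The lift calculation itself is simple once you have conversion counts for both segments. A minimal sketch, with all numbers hypothetical:

```python
def incremental_lift(exposed_conversions, exposed_size,
                     holdout_conversions, holdout_size):
    """Lift = exposed conversion rate minus holdout conversion rate."""
    exposed_rate = exposed_conversions / exposed_size
    holdout_rate = holdout_conversions / holdout_size
    return exposed_rate - holdout_rate

# Hypothetical test: 90,000 users saw the ads, 10,000 were held out
lift = incremental_lift(4_500, 90_000, 350, 10_000)
print(f"Exposed rate:     {4_500 / 90_000:.1%}")  # 5.0%
print(f"Holdout rate:     {350 / 10_000:.1%}")    # 3.5%
print(f"Incremental lift: {lift:.1%}")            # 1.5%
```

In this sketch the channel gets credit for 1.5 points of conversion, not the full 5% the exposed group converted at; the other 3.5 points would have happened anyway.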
One retail client I worked with was spending $8M annually on display ads. Their multi-touch model said display drove 18% of revenue. When we ran an incrementality test with a holdout group, the true lift was 4%. The other 14% would have converted anyway through organic search and direct traffic.
That is $6.4M in perceived value versus roughly $1.4M in actual value. The remaining $5M was being misattributed.
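The gap in that example works out as follows; a sketch back-calculating from the rounded figures above:

```python
attributed_share = 0.18   # what the multi-touch model claimed for display
true_lift = 0.04          # what the holdout test measured
perceived_value = 6_400_000

# Total revenue implied by the attributed value
total_revenue = perceived_value / attributed_share   # ~$35.6M
actual_value = total_revenue * true_lift             # ~$1.4M

misattributed = perceived_value - actual_value
print(f"Perceived: ${perceived_value/1e6:.1f}M, actual: ${actual_value/1e6:.1f}M")
print(f"Misattributed: ${misattributed/1e6:.1f}M")
```

The $8M spend never enters this calculation; it matters for ROAS, but the misattribution gap is purely the difference between claimed and measured lift.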
How to Build Incremental Attribution
Step 1: Pick your biggest channel. Start with wherever you spend the most money.
Step 2: Create a holdout group. Withhold that channel from 10-20% of your target audience for 4-6 weeks.
Step 3: Measure the difference. Compare conversion rates between the exposed group and the holdout. The gap is your true incremental lift.
Step 4: Calculate true ROAS. Divide the incremental revenue (not total attributed revenue) by the spend.
Step 5: Repeat for each channel. Work down your spend stack from highest to lowest.
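Steps 3 and 4 can be sketched together. Assuming hypothetical test numbers and a known average order value:

```python
def true_roas(exposed_conv, exposed_n, holdout_conv, holdout_n,
              avg_order_value, channel_spend):
    """ROAS based on incremental revenue only, not total attributed revenue."""
    lift = exposed_conv / exposed_n - holdout_conv / holdout_n
    # Conversions the channel actually caused among the exposed group
    incremental_conversions = lift * exposed_n
    incremental_revenue = incremental_conversions * avg_order_value
    return incremental_revenue / channel_spend

# Hypothetical six-week test on one channel
roas = true_roas(exposed_conv=4_500, exposed_n=90_000,
                 holdout_conv=350, holdout_n=10_000,
                 avg_order_value=120, channel_spend=100_000)
print(f"True ROAS: {roas:.2f}")  # incremental revenue per dollar spent
```

Dividing incremental revenue, rather than attributed revenue, by spend is the whole point of step 4: the same test data that looks healthy under multi-touch attribution can produce a ROAS below 1 here.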
The Audit That Takes One Afternoon
You do not need to rebuild your entire attribution model today. Start with a simple audit.
Pull 100 closed deals from the last quarter. For each deal, look at what your CRM says influenced the deal. Then call or survey the buyer and ask: "What actually made you decide to buy from us?"
I have done this exercise with six companies. In every case, the CRM attribution and the buyer's reported influence matched less than 40% of the time. The gap between what your data says and what your customer says is your attribution error.
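The audit itself reduces to a match-rate calculation. A sketch with a few hypothetical deal records, pairing what the CRM attributed with what the buyer reported:

```python
# Hypothetical deals: CRM-attributed channel vs. buyer's own answer
deals = [
    {"crm": "paid_search", "buyer": "referral"},
    {"crm": "email",       "buyer": "email"},
    {"crm": "display",     "buyer": "organic_search"},
    {"crm": "referral",    "buyer": "referral"},
]

matches = sum(1 for d in deals if d["crm"] == d["buyer"])
match_rate = matches / len(deals)
print(f"Attribution match rate: {match_rate:.0%}")
```

With 100 real deals instead of four, this one number is your attribution error in a single afternoon.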
Your First Step
Pull 100 closed-won deals. Check what your CRM attributes them to. Then ask 20 of those buyers what actually influenced their decision. If the answers do not match, your attribution model is lying to you, and every dollar you allocate based on it is a gamble.
Book a diagnostic if you want help rebuilding your attribution model from first-party data.
