Your event attribution model inherited inbound logic. That's why your $240K line looks like $0.
Last-touch and 90-day windows were built for form fills, not 8-month deals that start at a booth. A first-principles attribution model that fits events.
A Series C cybersecurity VP Marketing told me last quarter that her RSA booth looked like a $0 line item in the CRM. RSA cost her $240K. Her attribution dashboard showed three closed-won deals tied to it, roughly $180K in revenue, a net loss on paid media math. She knew the number was wrong, because two of her biggest deals that year started at that booth. But the model said what the model said, and her CFO reads the model.
The model was a 90-day, last-touch attribution rule, configured in Salesforce and inherited, untouched, from the inbound marketing team that set up the CRM three years ago. It was built for form fills, not for an 8-month enterprise sales cycle that begins with a conversation at a trade show.
Most event attribution problems are modeling problems. Your attribution model was designed for a channel with fundamentally different physics, and you are trying to stretch it over events. The math does not fit.
Why inbound attribution logic breaks at a booth
Inbound attribution assumes three things:
- The lead self-identifies through a form, a download, or a demo request.
- The intent is captured in the same session as the action.
- The sales cycle is short enough that a 30, 60, or 90-day window makes sense.
A form fill works inside all three. A booth scan breaks all three.
The person who scanned your badge at RSA did not self-identify as a buyer. They were walking the floor, collecting vendor information, and sometimes scanning as a polite exit strategy. Their actual intent lives in the conversation your AE had after the scan, and in the meeting your team booked three weeks before RSA even started. The intent is not co-located with the scan event.
The sales cycle is not 90 days. For enterprise security, RegTech, and fintech, it is 6 to 18 months. By the time the deal closes, the attribution window has expired, the last touch is a demo request that happened six months after RSA, and the $240K booth gets zero credit for the $800K deal it actually sourced.
This is a structural mismatch. An inbound attribution model will systematically under-credit every event, every year, forever. You cannot fix it by tuning the dashboard. You have to replace the model.
What a working event attribution model looks like
Three pieces, in order.
One. Event-sourced and event-influenced, tracked separately.
Sourced means the opportunity’s first qualifying touch was at or before the event, tied to a specific attendee record. Influenced means an event touch appears in the contact roles on the opportunity, even if the first touch was elsewhere. These are different numbers. They belong in different fields on the Opportunity object in Salesforce, or as properties on the Deal object in HubSpot. Collapsing them into one “event ROI” number destroys the signal.
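As a sketch, the split comes down to two boolean checks per opportunity. The field names here (`first_qualifying_touch`, `touch_channels`) are illustrative placeholders, not a specific Salesforce or HubSpot schema, and treating "influenced" as excluding "sourced" is one reasonable design choice:

```python
from dataclasses import dataclass, field

@dataclass
class Opportunity:
    name: str
    first_qualifying_touch: str  # channel of the opportunity's first qualifying touch
    touch_channels: list[str] = field(default_factory=list)  # all touches on contact roles

def is_event_sourced(opp: Opportunity, event: str) -> bool:
    # Sourced: the opportunity's first qualifying touch was the event itself
    return opp.first_qualifying_touch == event

def is_event_influenced(opp: Opportunity, event: str) -> bool:
    # Influenced: an event touch appears in the deal's contact roles,
    # even though the opportunity originated elsewhere
    return event in opp.touch_channels and not is_event_sourced(opp, event)

opp = Opportunity("Acme expansion", first_qualifying_touch="outbound",
                  touch_channels=["outbound", "RSA", "demo"])
print(is_event_sourced(opp, "RSA"), is_event_influenced(opp, "RSA"))  # False True
```

Because the two checks are mutually exclusive, they can live in two separate CRM fields and roll up without double-counting.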
In the RSA case study we worked on, the Series C cybersecurity team tracked sourced pipeline at $2.4M across RSA, Black Hat, and a regional summit. The influenced number was larger, because two deals that closed later had RSA contacts in the deal record even though the opp originated from outbound. Without the split, the finance team could have argued the influenced contribution was marketing cope. With the split, it was a defensible CRM field.
Two. An attribution window that matches your sales cycle.
The 90-day window is a Salesforce factory setting. It was never meant to be a strategy. If your average enterprise deal cycle is 8 months, your attribution window should be 8 months, not 90 days. Yes, this means events you ran last quarter keep earning credit this quarter. That is the point. Event pipeline compounds over multiple quarters, and your attribution model has to let it.
Setting this up is not hard. In Salesforce, it is a custom field on the Opportunity called “event-sourced at” with the stamp of the original event touch. In HubSpot, it is a deal property. The ROI query then asks “show me opportunities where event-sourced at falls within X months before close date” and X is whatever your sales cycle actually is.
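The matched-window query reduces to a date filter. A minimal sketch, assuming a hypothetical `event_sourced_at` stamp and an 8-month window; swap in whatever your CRM fields and real sales cycle are:

```python
from datetime import date, timedelta

WINDOW_DAYS = 8 * 30  # match the window to your sales cycle, not the 90-day default

opps = [
    {"name": "Deal A", "event_sourced_at": date(2025, 4, 28),
     "close_date": date(2025, 11, 12), "amount": 300_000},
    {"name": "Deal B", "event_sourced_at": date(2025, 4, 28),
     "close_date": date(2026, 6, 1), "amount": 250_000},
]

def in_window(opp) -> bool:
    # Credit the event if the sourced stamp falls within WINDOW_DAYS before close
    gap = opp["close_date"] - opp["event_sourced_at"]
    return timedelta(0) <= gap <= timedelta(days=WINDOW_DAYS)

credited = [o["name"] for o in opps if in_window(o)]
print(credited)  # ['Deal A'] — Deal B closed outside even the 8-month window
```

The same filter with `WINDOW_DAYS = 90` is the factory setting that zeroes out most enterprise event pipeline.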
Three. Per-stage touch credit.
Last-touch attribution in an enterprise sale is a reporting crime. The last touch is always the demo, the proposal, or the contract review. This is why your dashboard shows “inbound: 82%” and “events: 3%” when your AEs will tell you in private that half their top deals started at conferences.
A working model credits the touch that created the opportunity, plus every qualified touch in the cycle. If RSA sourced the account, RSA gets the sourced credit. If Black Hat had a follow-up meeting with the same account, Black Hat gets an influenced credit. If the deal closed after a demo six months later, the demo gets a closed-influenced credit. Three different events, three different credits, one opportunity. You can still roll up to a single pipeline number. The attribution lives per touch.
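The per-touch logic above can be sketched in a few lines. The credit labels and the ordered touch list are illustrative, not a specific CRM object model:

```python
def credit_touches(touches: list[str], sourcing_event: str) -> list[tuple[str, str]]:
    # One opportunity, many touches, one credit per touch:
    #   sourced           -> the touch that created the opportunity
    #   closed-influenced -> the touch nearest the close
    #   influenced        -> every other qualified touch in the cycle
    credits = []
    for i, touch in enumerate(touches):
        if touch == sourcing_event:
            credits.append((touch, "sourced"))
        elif i == len(touches) - 1:
            credits.append((touch, "closed-influenced"))
        else:
            credits.append((touch, "influenced"))
    return credits

print(credit_touches(["RSA", "Black Hat", "demo"], sourcing_event="RSA"))
```

Each touch keeps its own credit, so RSA, Black Hat, and the demo are all visible on the same opportunity, and pipeline can still roll up to one number.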
This is the model Luminik writes back into Salesforce and HubSpot within 48 hours of floor close, with HIGH, MEDIUM, and LOW confidence badges on each match. The model is the product, because the reporting is what the CFO actually reads.
A worked example with real numbers
Here are the Series C cybersecurity team's numbers, before and after the model change.
Before. RSA 2025, 90-day last-touch window. 1,500 booth scans, 20 opportunities created, 3 closed-won in 90 days, $180K revenue attributed against $240K spend. Reported event ROI, 0.75x, a loss. CFO's internal read, "RSA is a branding exercise."
After. Same RSA event, same 1,500 booth scans, same 20 opportunities. But the model changed to event-sourced plus event-influenced, 8-month window, per-stage credit. The numbers: 6 closed-won across the 8 months (not 3 in 90 days), $720K sourced revenue, another $1.1M influenced pipeline still open, 40 pre-booked meetings that became 12 opportunities (not captured in the old model because they never hit a booth scanner). Total RSA contribution: $720K closed plus $1.1M open pipeline, on a $240K investment.
Same event. Same team. Same conversations. Different model. The first model reports a rounding error. The second model reports the highest-ROI channel the team has. The event did not change. The lens did.
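The before/after comparison is simple arithmetic on the numbers above:

```python
SPEND = 240_000  # RSA booth investment

# Before: 90-day last-touch window
before_revenue = 180_000
print(f"Before: {before_revenue / SPEND:.2f}x on closed revenue")  # 0.75x

# After: sourced + influenced, 8-month window, per-touch credit
after_closed = 720_000
after_open_pipeline = 1_100_000
print(f"After:  {after_closed / SPEND:.2f}x closed, "
      f"{(after_closed + after_open_pipeline) / SPEND:.2f}x including open pipeline")
# 3.00x closed, 7.58x including open pipeline
```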
What this means for your next event
If your attribution model was set up by the person who configured Marketo or HubSpot forms, it was designed for demo requests. You have been evaluating your event program with the wrong instrument for as long as that model has been in place.
Before your next flagship, do three things.
One. Audit the attribution window on your Opportunity object. If it is 90 days and your sales cycle is 8 months, that is your single biggest reporting leak. Fix it first. Custom field, stamp the event-sourced date, re-run the query with a matched window.
Two. Split sourced and influenced into two fields. Stop rolling them into a single "campaign influenced revenue" number. Your CFO will not trust a number whose methodology is opaque; the two-field split makes the methodology legible.
Three. Capture pre-booked meetings as first-class attribution events. The 40 pre-booked meetings at RSA drove more pipeline than the 1,500 scans did. If pre-booked meetings do not exist as attribution objects in your CRM, half your event program is invisible.
Every time I walk an event marketing leader through this, the reaction is the same. “We have been getting punished for years by a model we did not design.” The fix is a measurement model that matches the physics of the channel.
If you want to see how the same event looks under both models in your own CRM, the post-event attribution page walks through the writeback fields. Or read the real cost of bad event attribution for the framework your CFO will actually read.