The conversation nobody wants to have.
Every monthly report meeting hits this point. Someone, usually the CFO, sometimes the owner, asks why the Google Ads dashboard shows one number and the GA4 dashboard shows another. The agency says something about "attribution differences." The CFO says the numbers should match. The agency says, well, technically they can't. The meeting gets shorter. Everyone leaves uncertain.
Then next month it happens again. Usually with different numbers and the same confusion.
Here's the underlying truth: the numbers aren't going to match. They're measuring different things with different rules. Accepting that is the first step. Documenting what each number actually means is the second.
What each platform is counting.
Google Ads counts conversions that its own tag fires for. Its click-through conversion window is 30 days by default (configurable from 7 to 90 days). Its logic is "did someone click an ad, and then later complete the conversion event?" Older accounts credit the last Google Ads click within the window, regardless of what happened between; newer accounts default to Google's data-driven model, but either way the credit goes only to Google Ads touches.
GA4 counts key events fired by its own tag, often the same on-page tagging, but with different attribution logic. GA4 uses data-driven attribution by default on most properties now, which means it distributes credit fractionally across channels and touches rather than giving 100% of the credit to the last Google Ads click. The reporting UI sometimes shows last-click and sometimes data-driven, depending on which report you're in and which attribution model the property admin selected.
Your commerce platform (Shopify, WooCommerce) counts all orders, from all sources, as they occur. It doesn't know or care about paid media attribution.
Your CRM counts leads when they enter the CRM, which might be immediately on form submission or might be on a schedule.
Same transaction. Four or five different numbers, each one computed correctly according to its own rules. None of them wrong. None of them agreeing.
Why the gap is structural, not a bug.
The gap isn't primarily about tracking errors, though tracking errors make it worse. It's about definitional differences. Here are the big ones.
Attribution window
GA4's default lookback window is 90 days for most key events, and 30 days for acquisition events. Google Ads can be set anywhere from 7 to 90 days. The same conversion falls inside different windows depending on which platform is looking.
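To make the window problem concrete, here is a minimal sketch (the dates and window lengths are invented for illustration, not pulled from any platform API) showing how one click-to-conversion lag is counted by some windows and missed by others:

```python
from datetime import date

def in_window(click: date, conversion: date, window_days: int) -> bool:
    """True if the conversion falls within the click-through attribution window."""
    lag = (conversion - click).days
    return 0 <= lag <= window_days

click = date(2024, 3, 1)
conversion = date(2024, 3, 25)  # 24 days after the click

for days in (7, 30, 90):
    counted = in_window(click, conversion, days)
    print(f"{days}-day window counts it: {counted}")
# A 7-day window misses this conversion; 30- and 90-day windows count it.
```

Two platforms looking at the identical click and the identical purchase can disagree purely because of this setting.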
Attribution logic
Last-click vs data-driven vs position-based vs first-click. Each model distributes credit differently. A conversion that touched four channels on its path to purchase shows up with different credit allocation in every system.
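A toy sketch makes the divergence visible. These are simplified rule-based models (GA4's data-driven model is proprietary and machine-learned, so it is not reproduced here), and the four-touch path is invented for illustration:

```python
def allocate(path: list[str], model: str) -> dict[str, float]:
    """Distribute one conversion's credit across an ordered list of channel touches.
    Assumes unique channel names in the path."""
    n = len(path)
    if model == "last_click":
        credit = [0.0] * n
        credit[-1] = 1.0
    elif model == "first_click":
        credit = [0.0] * n
        credit[0] = 1.0
    elif model == "linear":
        credit = [1.0 / n] * n
    elif model == "position_based":
        # 40% to first touch, 40% to last, remaining 20% split across the middle
        credit = [0.2 / (n - 2)] * n
        credit[0], credit[-1] = 0.4, 0.4
    else:
        raise ValueError(f"unknown model: {model}")
    return dict(zip(path, credit))

path = ["organic", "display", "email", "paid_search"]
for model in ("last_click", "first_click", "linear", "position_based"):
    print(model, allocate(path, model))
```

Every model hands out exactly one conversion's worth of credit, so each platform's total is internally consistent. They just disagree about which channel earned it.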
Conversion definition
What counts as a conversion in Google Ads (the set of actions flagged as primary in the conversions column) is often different from what counts as a key event in GA4. Small businesses often have three to five actions counted as conversions in Google Ads and one or two in GA4. They report different numbers and both are "right."
Timing
GA4 processes data with a delay, typically 24 to 48 hours for most reports. Google Ads shows data closer to real time, and it reports conversions against the date of the ad click, while GA4 reports them against the date the event happened. A conversion from a click on the 30th that completes on the 2nd lands in different months in different reports.
Cross-device and cross-session
GA4 stitches sessions when it can, using signed-in user IDs or modeled identifiers. Google Ads does its own cross-device work. These two stitching processes don't see the same data and come to different conclusions.
What to actually do.
Three things.
First, pick a source of truth for each decision. For "did this campaign work?", the answer is usually the ad platform's view, because the ad platform's attribution is the mechanism that drives its bidding. For "how many total leads did we generate?", the answer is the CRM or the commerce backend. For "what's our channel mix telling us?", GA4's path reports are the most useful.
Don't try to pick one number that serves all three questions. There isn't one.
Second, reconcile once a quarter. Pull the numbers from all four systems. Don't try to match exactly. Try to understand the ratios. If Google Ads is reporting 40% more conversions than GA4, that's a big gap but it might be consistent month over month, which means it's structural. If the gap was 10% last quarter and 60% this quarter, something changed. The change is what's worth investigating.
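The quarterly check described above can be as simple as computing the gap ratio per period and flagging drift. A minimal sketch, with all conversion counts invented for illustration:

```python
def gap_pct(ads_conversions: int, ga4_conversions: int) -> float:
    """How much higher Google Ads reports than GA4, as a percentage of GA4's count."""
    return (ads_conversions - ga4_conversions) / ga4_conversions * 100

# Hypothetical quarterly pulls: (Google Ads conversions, GA4 key events)
quarters = {"Q1": (140, 100), "Q2": (145, 103), "Q3": (160, 100)}

prev_gap = None
for quarter, (ads, ga4) in quarters.items():
    gap = gap_pct(ads, ga4)
    # A steady gap is structural; a swing of more than ~10 points is worth a look
    drifted = prev_gap is not None and abs(gap - prev_gap) > 10
    flag = "  <-- investigate" if drifted else ""
    print(f"{quarter}: Google Ads reports {gap:+.0f}% vs GA4{flag}")
    prev_gap = gap
```

The 10-point threshold is a judgment call, not a standard; the point is that the quarter-over-quarter change in the ratio, not the ratio itself, is the signal.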
Third, write the reconciliation down. Not as a formal document. As a one-page explanation in your monthly report. "Google Ads counts conversions X way, GA4 counts them Y way, the gap is usually about Z%, and here's why." When the CFO asks the question again, there's a page to point to, and the meeting moves on.
What this does for the business.
The goal of this kind of documentation is not statistical purity. It's meeting efficiency. If everyone around the table understands that the numbers won't match and has a one-page document explaining why, the monthly report becomes a review of decisions instead of a debate about methodology.
Most operators don't need perfect attribution. They need to know which lever moves what, and roughly by how much. Getting there requires accepting that the platforms aren't going to resolve their disagreement, and building a reporting layer that explains the disagreement rather than trying to hide it.
A good monthly review doesn't pretend the numbers match. It explains what each number means and what decision it should drive. Everything else is window dressing.
Work with a senior practitioner.
Pacific Northwest Digital Marketing runs paid media for small and mid-sized businesses. Every engagement is run by a senior practitioner from first call through monthly reporting.
Start the conversation →