Wrong answer to the right problem
Segment #02: You can't out-maneuver Google Ads.
The Real Job is Picking the Right Problem
The most dangerous thing you can do in a startup is execute brilliantly on the wrong thing. It happens when you lack a forcing function that makes you step back and ask:
Is this actually the right motion?
If this works…does it move the metric that matters?
Am I optimizing this because it's familiar or because it's high-leverage?
Welcome to a new Marketing in a Box weekly segment called Wrong answer to the right problem, where we dissect one sharp, well-executed tactic that looked like progress and explain why it didn't matter.
Not because it failed, but because it solved the wrong layer of the problem.
Segment #02: The maneuver that didn’t fix Google Ads
🎯 The Setup
This used to be one of the eternal questions in Google Ads marketing: do we let Google automate the bidding, or do we build something ourselves that we "think" will be good enough to beat Google? I've worked with startup founders who believe their customer buying behavior and their business offering are so unique that Google can't possibly understand them the way the internal team does. They believe a manual approach with CPC bidding has a higher ceiling, and that an internal predictive model built on their own data science will beat Google's Smart Bidding.
This is magical thinking.
I previously worked at a marketplace where we ran Target CPA bidding in a consolidated Google Ads account structure. During peak season, we were smashing conversion and revenue goals. Our month-over-month conversions were steadily increasing while our cost per acquisition declined or held stable.
Then, in the final month of peak season, conversions crashed and CPA skyrocketed. Conversions were cut in half month-over-month. CPA more than doubled. The damage spilled into the following month, and the month after that, and performance never fully recovered.
During that time, we had switched to manual CPC bidding and de-consolidated the campaigns. We built a marketing machine, a standardized system (sketched in code after this list) with:
a set list of universal keywords and match types for all campaigns
max CPCs per keyword
templatized ad copy
uniform mobile and desktop bid adjustments
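For illustration, here's roughly what that machine looked like expressed as a config. Every keyword, bid, and market below is hypothetical, not our actual values; the point is the rigidity: one template, stamped onto every campaign.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins; our real keywords, bids, and markets differed.
UNIVERSAL_KEYWORDS = {
    # keyword: (match type, hand-set max CPC), identical in every market
    "house cleaning": ("PHRASE", 1.50),
    "cleaning service near me": ("EXACT", 2.25),
}

@dataclass
class CampaignTemplate:
    market: str
    keywords: dict = field(default_factory=lambda: dict(UNIVERSAL_KEYWORDS))
    headline: str = "Book {service} in {city} Today"  # templatized ad copy
    mobile_bid_adj: float = -0.20   # uniform device adjustments everywhere
    desktop_bid_adj: float = 0.10

# Stamp the same template onto every market, with zero per-market judgment.
campaigns = [CampaignTemplate(market=m) for m in ("austin", "boston", "denver")]
```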
Even though we had a stellar peak season up until the final month, we concluded we had a click volume problem: we were getting fewer clicks, which led to fewer conversions. We thought Smart Bidding was constraining our ability to "win clicks." So we pivoted to manual CPC with more campaigns. We thought that if we controlled the cost per click in each campaign (CPCs were highly variable due to the nature of our offering), we could outsmart Google and scale the account to higher levels.
⚠️ The Wrong Answer
When conversion volume dips and marketing spend inefficiency rises, the first thought, and rightfully so, is to check the marketing engine.
Are we spending on bad keywords in places we don't have supply?
Are we getting outbid in auctions because of poor ad ranking?
Are we sending traffic to the wrong landing page?
Here's the trap: we believed our customer buying behavior and our business offering were so unique that manual bidding was the only way to go.
We built a marketing machine with minimal "editorializing" and individual decision-making that we thought would beat Google's machine learning. We didn't understand that CPC bidding is an unsophisticated approach to auction bidding, and we assumed click volume was highly correlated with conversion volume (it wasn't). We operated Google Ads as if it were 2013 instead of 2024.
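To make "unsophisticated" concrete, here's a toy auction simulation (my own illustration with made-up numbers, not Google's actual system). A flat manual CPC pays the same price whether an auction is likely to convert or not; a bidder that scales its bid to expected value wins more of the convertible auctions and skips the junk:

```python
import random

random.seed(7)
TARGET_CPA = 40.0  # what we're willing to pay per conversion (hypothetical)
FLAT_BID = 2.0     # a hand-set manual CPC (hypothetical)

def simulate(bidder, n=200_000):
    cost = conversions = 0.0
    for _ in range(n):
        p_convert = random.betavariate(1, 15)      # hidden per-auction conversion rate
        clearing_price = random.uniform(0.5, 4.0)  # price it takes to win this auction
        if bidder(p_convert) >= clearing_price:
            cost += clearing_price
            conversions += p_convert               # expected conversions from the click
    return conversions, cost / conversions

# Flat CPC ignores conversion likelihood; value-based bidding prices each
# auction at TARGET_CPA * p, so it never knowingly overpays for a conversion.
for name, bidder in [("manual CPC ", lambda p: FLAT_BID),
                     ("value-based", lambda p: TARGET_CPA * p)]:
    conv, cpa = simulate(bidder)
    print(f"{name}: {conv:,.0f} conversions at CPA {cpa:.2f}")
```

Google's real Smart Bidding uses far richer per-auction signals than this toy, but the asymmetry is the same: a fixed CPC can't tell a high-intent query from a low-intent one.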
We didn't need to restructure the Google Ads account with mCPC bidding and hundreds of campaigns. That didn't address the root cause: the business offering and pricing structure fundamentally changed while conversion volume expectations remained the same.
🧩 The Real Problem
There were two fundamental problems:
Our offering and pricing structure fundamentally changed without proper validation before site-wide rollout.
There was significant distrust in Google Ads data, and thus a desire to take as many decision levers as possible out of Google's hands.
To our credit, we did need to standardize the ad account. Similar campaigns needed to have similar ad group structures, keywords, and match types so that we could debug and scale better. We couldn't answer basic questions like:
Why did CPA go up?
Did a keyword drop in volume?
Are we even bidding on that query in every market?
By standardizing, we made performance insights comparable. It also gave us a blueprint to test changes across markets and actually learn something.
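Once campaigns shared structure, those questions became queries instead of archaeology. A minimal sketch with the google-ads Python client (the customer ID and credentials file below are placeholders):

```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")  # placeholder creds
ga_service = client.get_service("GoogleAdsService")

# With identical keyword sets across markets, one query answers
# "did this keyword drop, and in which campaigns?" everywhere at once.
query = """
    SELECT
      campaign.name,
      ad_group_criterion.keyword.text,
      ad_group_criterion.keyword.match_type,
      metrics.clicks,
      metrics.conversions,
      metrics.cost_micros
    FROM keyword_view
    WHERE segments.date DURING LAST_30_DAYS
    ORDER BY metrics.cost_micros DESC
"""

for row in ga_service.search(customer_id="1234567890", query=query):
    cost = row.metrics.cost_micros / 1_000_000
    cpa = cost / row.metrics.conversions if row.metrics.conversions else float("nan")
    print(row.campaign.name, row.ad_group_criterion.keyword.text,
          row.metrics.clicks, f"cpa={cpa:.2f}")
```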
However, standardizing the account wasn't the primary issue. The primary issue was the offer change and the price increase, rolled out without testing. We fundamentally changed the entire offer, checkout flow, and pricing structure and expected Google Ads to remain stable…even as we simultaneously overhauled the Google Ads account structure and bidding strategy.
The right problem was: how do we create a Google Ads program that lets us test hypothetical on-site changes like pricing and offer structure? The consolidated campaign approach wouldn't allow for this, but neither would the switch to mCPC bidding.
The answer we gave was: let's roll out the offering and price change without testing, and let's concurrently rebuild the Google Ads account in a less sophisticated way.
✅ The Right Lever
The problem wasn't the bidding. It was feedback.
We didn't need to rebuild the entire Google Ads account from scratch. We needed a system that let us test changes without committing to them wholesale, especially when the change was something as fundamental as our offering and pricing.
The right lever wasn't "switch everything to manual CPC." The right lever was controlled experimentation.
If we had truly understood what was at stake, we would have tested the new offer and pricing in a small, representative set of campaigns (a rough measurement sketch follows this list):
Keep the rest of the account untouched (same offer, same landing page, same structure)
Isolate a few cities or skills to test the new offer and checkout flow
Track performance divergence: Does CPA rise? Does conversion rate tank? Do new landing pages underperform?
Use Smart Bidding in those campaigns to understand how Google’s auction reacts to the change
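The measurement itself could have been simple. A minimal sketch, assuming a daily campaign export with hypothetical column names and a hand-applied test/control label:

```python
import pandas as pd

# Hypothetical daily export:
# columns: date, campaign, cohort ("test" or "control"), cost, conversions
df = pd.read_csv("campaign_daily.csv")

daily = df.groupby(["date", "cohort"], as_index=False)[["cost", "conversions"]].sum()
daily["cpa"] = daily["cost"] / daily["conversions"]

# Side-by-side CPA per day; a widening gap after the new offer launches
# is the divergence signal that says "the offer moved, not the account."
cpa = daily.pivot(index="date", columns="cohort", values="cpa")
cpa["divergence"] = cpa["test"] / cpa["control"] - 1
print(cpa.tail(14))
```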
That kind of A/B thinking, where campaigns are used as levers to test hypotheses, not just drive volume, is what would have revealed the real issue. Instead, we:
Changed the offering
Changed the landing experience
Changed the price
Changed the bidding model
Changed the account structure
…all at once.
And then we asked, "Why doesn't this work?"
The right lever wasn't (the illusion of) more control. It was more structure for testing and more trust in a learning system. Manual CPC gave us the illusion of precision, but it blinded us to what was actually happening: a complete shift in the customer value equation, rolled out with zero validation.
🔥 TL;DR for Founders
We thought Google Ads broke, so we rebuilt the whole thing: new bidding model (manual CPC), new campaign structure, and a standardized system.
But performance didn't improve, because that wasn't the real problem.
The real issue was a fundamental change to our offering and pricing, rolled out everywhere without testing. We killed what was working and had no way to isolate why things were failing.
What we should've done:
Test the new offer in a few campaigns, keep the rest stable, and let Google's smart bidding show us how the market reacts. Use Google Ads as a testing system, not just an acquisition engine.
More levers ≠ more control. Better feedback loops = better decisions.
This newsletter is for you. What marketing challenges are you facing in your startup journey? Reply directly to this email with your questions or topics you'd like to see covered in future issues.
Until next week,

P.S. Found this helpful? Forward it to another founder who might benefit. We're all in this together.
P.P.S. Don't forget to download the Growth Marketing OS by clicking the button below.