Last week, we examined the what vs. why trap: how growth teams conflate what happened with why it happened.
This week, we're looking at a related problem: how teams misjudge where impact actually comes from.
We're taught that if you have limited resources, you need to take "big swings." The logic sounds right: you can only run so many experiments, so each one should be as impactful as possible. And the way to maximize impact? Change a lot of variables. Test a completely different page, a completely different flow, a completely different approach.
The assumption buried in there: more variables changed = bigger expected impact.
In my experience, that's just not true. Or, more precisely: it's not that simple.
What I Tried First
Over a decade ago, I was one of the first growth hires at Grammarly. I built their experimentation program from the ground up. At the time, a major focus was improving the homepage → signup rate.
Our funnel data pointed to that step being a constraint (among a handful of other signals). That was the 'what' — now we needed to figure out the 'why.'
I did everything we're taught to do. I maintained a high testing tempo. I took big swings. New homepage designs, completely different copy angles, long-form pages versus short ones, entirely different signup flows. I was following the playbook: move fast, ruthlessly prioritize, test aggressively, ship constantly.
I was proud of our inputs.
The problem was that the conversion rate barely budged. And if I'm being honest, we weren't learning much. I was missing something.
What Actually Worked
So I did something that felt like the opposite of best practice. Instead of asking "what should we test next?" or "how do we maintain a testing cadence of X per month?", I stopped shipping. I slowed down. And I started asking why.
Grammarly had just released its browser extension and switched to a freemium model. Historically, Grammarly had been a paid product. So I started looking at our situation through the lens of how our channel, product, and model all fit together (would've been really helpful had the Foundational Five framework been a thing back then).
A few signals started pointing in the same direction:
Channel context. Most of our traffic was high-intent search traffic. People searching for "grammar checker" or similar terms. They weren't browsing. They'd already decided they wanted something like Grammarly. But when they searched those terms, the results were full of clearly free alternatives. Low quality, but free. Our visitors were being primed to expect free before they even landed on our page.
User feedback. I talked to our support and social teams. One of the most common questions they were fielding: "Is Grammarly free?" We had a perception problem left over from the paid-only days.
User behavior. Heatmap data and user recordings showed that most users spent little time exploring the page. Per the channel context, they knew what they wanted.
Put it all together, and the hypothesis became clear: these visitors need to know it's free, and they need to see it exactly where they're making the decision.
So I added "it's free" to the CTA buttons. Less than a minute to build.
The result? About 8x the lift of any other experiment I'd run. And probably a top ten result out of hundreds of tests during my time at Grammarly. At least from a sheer conversion rate standpoint.
Two words. Not a new page. Not a new flow. Not a big swing. Two words that solved a real problem because I'd finally stopped to figure out where the actual leverage was.
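As an aside for anyone who wants to sanity-check a lift like that: here's a minimal sketch of the standard two-proportion z-test teams often use to read a CTA copy experiment. The function name and the conversion counts below are hypothetical placeholders I made up for illustration, not Grammarly's actual numbers or tooling.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Compare control vs. variant conversion rates with a two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    lift = (p_b - p_a) / p_a  # relative lift of variant over control
    return z, lift

# Hypothetical numbers: 10,000 visitors per arm, control converts at 5%,
# the "it's free" CTA variant at 6.5%.
z, lift = two_proportion_z(conv_a=500, n_a=10_000, conv_b=650, n_b=10_000)
print(f"relative lift: {lift:.1%}, z-score: {z:.2f}")
# relative lift: 30.0%, z-score: 4.56 (well past the usual |z| > 1.96 bar)
```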
The Point Isn't Micro vs. Macro
It's tempting to take this and conclude: small tests beat big tests. That's not what I'm saying.
What I'm saying is that in my experience, the relationship between the size of an experiment and its impact can be unintuitive. The thing that determines impact is whether you're hitting the right lever. And just like with product, "the right lever" means solving a fundamental problem for your user.
Here's a silly way to think about it that I share with teams I advise. What's the single most impactful change any company could make to their funnel?
Remove the buy button.
If you have a 5% purchase rate and you take away the button, you now have a 0% conversion rate. One variable. Maximum impact. Obviously absurd, but it proves the point. Impact doesn't come from how much you change. It comes from solving user problems (or in this case, creating them 😅).
Finding Your Levers
So how do you find your version of "it's free"?
The short answer: you have to slow down and do the work that most teams skip.
At Grammarly, I didn't stumble into the answer. I looked at the data to understand where the problem was. I talked to customer-facing teams to understand what users were actually confused about. I studied the competitive landscape to understand what expectations visitors were coming in with. I observed actual users as they navigated the site. Each signal on its own was just a clue. Layered together, they pointed to an answer I never would have found by just shipping more tests.*
That's the pattern. Quantitative data tells you where to look. Qualitative research helps you understand why. And frameworks like the Foundational Five help you see how the pieces connect so you're not just looking at isolated metrics.
Most teams skip the qualitative layer entirely. Shipping new tests is exciting. Digging into user psychology, talking to support, studying the competitive landscape? Not so much. But that slow work is where the real leverage gets uncovered.
*I certainly could've produced the same result through brute force. But I wouldn't have gotten the same learning. It's entirely possible we would've eventually built a random variation that happened to include "free" in the CTA. It may have increased CR%, but we wouldn't have had any clue as to why. Again, that 'what vs. why trap' from last week.
What's Next
I realize we've now spent two editions talking about common experimentation mistakes without fully answering the question: how do you actually do this well?
That's coming. I'm working on a more tactical breakdown and hope to share that next week. Stay tuned!
Justin Setzer
Demand Curve Co-Founder & CEO