Growth Hacking For Dummies

Growth teams can make common but serious mistakes when implementing the growth hacking process. This article provides an overview of some of the more typical problems and looks at ways for you to work around them.

As you’ll see, many of these pitfalls are related to cultural issues and usually result from not following the established growth hacking process itself. Let the data and your empathy for the customer guide you and you’ll mitigate most of the problems that afflict growth teams who are just starting out.


Ignoring the need to constantly evangelize progress and learning

Failing to sing the praises of the growth hacking process from the rooftops is probably the biggest stumbling block when it comes to creating a company-wide culture of growth. Especially as you’re getting your growth program off the ground, you have to evangelize experimentation and share those quick wins in high-decibel fashion.

When the business isn’t used to operating this way, seeing data for the first time can create a sense of empowerment throughout the organization.

Be consistent with what, how, and where you communicate progress when it comes to growth. Whether it’s an internal wiki, an internal communication platform channel, email, company-wide meetings, lunch-and-learns, or any other venue, posting summaries of tests and the insights the team gleaned leads to greater understanding and adoption of the growth mentality.

Ultimately, your goal is to build companywide excitement for the program in order to break down silos and gather input from every corner of the company on ideas that hold growth potential.

Not sticking to the growth meeting agenda

The one hour you set aside for the weekly growth meeting goes by fast. Having an agenda and sticking to it is critical for working your way through communicating key metrics, sharing the insights gained from the past week, and then deciding on tests to prioritize in the next testing cycle.

You should expect additional questions to emerge when discussing what happened last week, but you absolutely cannot afford to let the agenda be derailed by anyone, especially someone higher up in the organization.

For good reason, the agenda of the growth meeting has fixed time slots for discussing specific aspects of growth. If a discussion extends beyond its allotted time, end it with a commitment to pick it up again later, and share a summary with the rest of the team.

The other common trap is for the growth meeting to become a brainstorming session. Avoid this at all costs. All brainstorming should happen between growth meetings. Within the meeting, you discuss only ideas that have already been prioritized from the backlog. If you don’t impose some discipline here, you won’t be able to establish the rhythm of a growth process.

Testing things that don’t need to be tested (right now)

When you first start testing during the growth hacking process and you’re looking for quick wins, it’s okay not to be strictly guided by your growth model. But as you gain a bit of confidence, you can’t afford to be scattershot about it. It’s easy to fall into the trap of meeting the goal of conducting three or more tests per week while barely moving the needle on your North Star Metric (NSM), which is incredibly frustrating.

It’s easy to understand the lure of testing in areas you’re comfortable with, yet you cannot allow yourself to do only the simple things or the ones you like doing. Growth hacking is about doing more of what works. Finding out what works is a matter of investigating your growth model and setting objectives around the highest leverage opportunities (or problems).

Not taking big enough swings

Recognizing a big swing when it comes to tests isn’t obvious when you start. In fact, almost everything in the growth hacking process looks like a big change the first time it’s made, because nothing like it has been done before.

It’s important, however, for the team to go through the process of making swings that aren’t big enough and to see the small impact, if any, on your NSM. That’s when the light bulb will go off: the team will realize that what they thought was a big deal really wasn’t. This realization triggers better growth hacking discussions about what it truly means to be big.

Undertaking this process of taking swings and realizing you can go bigger is necessary for the team to build the courage to take bigger swings and to realize that they have permission to do so.

This is not to say that small wins don’t matter; of course they do. But the bigger wins in growth hacking come when you do something so radically different that the reaction to it has to be big. If that reaction happens to be a positive one, you’ll see it reflected in your NSM. Then take all the small swings you want, to incrementally make that big positive result as big as you can. Just be aware that small tests give you small results.

If you want big results from growth hacking, conduct bigger, bolder tests. It’s as simple (and as hard) as that.

Blindly copying what others have done

You’ll find no dearth of articles touting awesome results from tests. It’s these types of posts that have led people to think that growth hacking is all about tactics. This isn’t to say that you shouldn’t be inspired by others. Of course you should. External sources of inspiration can be a powerful wellspring of growth hacking ideas.

It’s important to understand why a test worked rather than get caught up in the tactic that produced the spectacular results. This is why blindly implementing a referral program, like the famous Dropbox example, or creating a contest offering an iPad as a prize likely wouldn’t produce the same breakout results for you that it did for others.

Understanding your own users' motivations and knowing what they value should guide your growth hacking strategies more than what has worked for famous companies or for competitors.

Measuring what you did versus what you learned

It’s easy to get caught up in running tests and increasing conversion rates. If you didn’t add value for your customers, however, all that just isn’t very meaningful — and therein lies the difference between what you did (your outputs) versus what you learned about helping your customers achieve their goals (your outcomes).

You should understand that, ultimately, customers are interested only in outcomes — the benefit they receive from your product. This can happen only if you understand their needs, priorities, motivations, and challenges.

What you did, on the other hand (running tests or optimizing your product, for example), is just an output of the growth process. Measuring what you learned about delivering value is ultimately what creates the biggest difference in your customers’ lives, so make sure you’re measuring the right things.

Not understanding the opportunity costs of testing

Failing to understand the opportunity costs of testing goes hand in hand with running tests that don’t need to be run right now. Every test has an opportunity cost associated with it, and you need to calculate how much more value a test provides versus doing more research.

Now, if you can conduct a quick test all by yourself and it doesn’t interfere with another test, just do it, because there’s no downside. But if the test is more involved, or if it touches an integral part of the product, any changes could have a noticeable negative impact.

In this case, check to see whether you have enough data to justify a test, and reduce the amount of uncertainty associated with that test.
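To make this trade-off concrete, here’s a back-of-the-envelope sketch in Python of the “test now versus research first” comparison. Every number in it (the win probabilities, the value of a win, the costs) is hypothetical, made up purely to illustrate the reasoning:

```python
# A back-of-the-envelope comparison of testing now versus doing more
# research first. All numbers are hypothetical illustrations.

def expected_value(p_win: float, gain: float, cost: float) -> float:
    """Expected payoff of a test: chance of winning times the value
    of the gain, minus what the test costs to run."""
    return p_win * gain - cost

# Testing now: little supporting data, so a low chance the variant wins.
test_now = expected_value(p_win=0.15, gain=50_000, cost=8_000)

# Researching first: extra up-front cost, but the added certainty
# raises the odds of designing a winning test.
research_first = expected_value(p_win=0.40, gain=50_000, cost=8_000 + 3_000)

print(f"Test now:       ${test_now:,.0f}")        # -> $-500
print(f"Research first: ${research_first:,.0f}")  # -> $9,000
```

In this made-up scenario, spending a little more on research flips the expected value from negative to positive; with stronger existing data, the numbers would favor testing right away.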

The other opportunity cost is related to not being able to identify tests that are actually winners. Sample size and test duration both affect your ability to call a true winner, but the longer any given test runs, the more you risk losing out on bigger gains from other, more impactful tests. Think about the size of the insight you’re testing for before determining how much time and how many resources you allocate to any given test.
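To see why duration matters, consider the arithmetic of calling a winner. The following Python sketch uses only the standard library and the standard sample-size formula for a two-proportion z-test; the 4 percent baseline conversion rate and the lifts are hypothetical numbers chosen for illustration:

```python
# How many visitors you need per variant before you can call a winner,
# using the standard two-proportion z-test sample-size formula.
# The baseline rate and lifts below are hypothetical.
from statistics import NormalDist

def sample_size_per_variant(baseline, lift, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a relative lift."""
    p1 = baseline                # current conversion rate
    p2 = baseline * (1 + lift)   # rate you hope the variant achieves
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p2 - p1) ** 2

# A small 5% lift takes vastly more traffic (and calendar time) to
# confirm than a bold 30% lift on the same 4% baseline.
for lift in (0.05, 0.30):
    n = sample_size_per_variant(baseline=0.04, lift=lift)
    print(f"{lift:.0%} relative lift: ~{n:,.0f} visitors per variant")
```

Running this shows that confirming a small lift can take dozens of times more traffic than confirming a big one, which is exactly why long-running tests for marginal gains carry such a high opportunity cost.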

Committing to a process where you run big, bold tests to discover big changes in response, followed by the smaller tests to optimize that response, helps mitigate some of that opportunity cost.

Having lots of tests that yield inconclusive results

First, you can expect tests now and then to yield inconclusive results. It’s bound to happen. What should not happen is for inconclusive results to become the norm. This situation occurs for one of these reasons:
  • You lacked a good hypothesis to begin with.
  • The experiment itself wasn’t designed well.
  • You didn’t set up how to measure the test correctly.
  • The analysis itself was incomplete.
The last reason is the most difficult to correct for, because the others are related more to process issues, which you should analyze and correct the first time you encounter them.
The bigger problem that emerges in growth hacking is the fact that, if you have inconclusive results, you don’t learn anything. A test you didn’t learn from is effectively wasted, which adds to the opportunity cost of testing.

Continually encountering these dead ends can become quite demoralizing. They instill apathy toward testing: everyone comes to expect that a certain number of tests will be wasted effort, no matter what.

This attitude reduces the effort that people invest in tests, and the result can be extremely insidious: a negative feedback loop in which less interest and effort lead to bad tests with inconclusive results, which in turn lead to even less enthusiasm for the next round of tests.

Not analyzing your tests in a timely fashion

This one hurts. Running a lot of tests is one thing, and you can get a lot of the process right by creating the right objectives and generating a lot of ideas to test. But running tests and leaving them unanalyzed is the kiss of death to a growth program. After all, growth involves learning more about what works. How else will you do more of it?

One growth hacker experienced this problem firsthand when he was responsible for analyzing acquisition tests related to selling more tickets for an event. Because he took too long to analyze these tests, the growth team didn’t learn about some opportunities they could have capitalized on, which inevitably led to selling fewer tickets than usual. You can be sure he has never let it happen again. Remember, not capitalizing on a win is also costly.

On the topic of acquisition, because you have no control over how channels change, it’s even more important to stay on top of such tests. You may find that an initial test gave you positive results, but if you wait too long to learn from it and the channel evolves in a way that’s unfriendly to that specific tactic, future tests won’t work.

So it’s doubly important to analyze your acquisition tests quickly, to get an early sign that things might be changing so you can adapt in a timely manner.

Not checking on whether the gains still hold

When you’re in the rhythm of testing, and launching tests weekly, it’s easy to forget about tests you’ve already analyzed. They’re in the knowledge base, but out of sight and out of mind.

The problem with this part of the growth hacking process is that no winning test provides infinite gains — not to mention the fact that most tests have small gains. You should have a process to go back and analyze how well the gains of any winning test have held over time and whether the gains show signs of decreasing (or have already decreased). You can use that information as a trigger to perform a new test around the area that provided the gain.
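As a sketch of what that periodic check might look like in Python: the test names, conversion rates, and the 10 percent decay threshold below are all hypothetical, and in practice you’d pull the current numbers from your analytics tool:

```python
# A minimal periodic "does the gain still hold?" review.
# All test names, rates, and the threshold are hypothetical.

winning_tests = [
    # (test name, conversion rate when the test won, conversion rate now)
    ("simplified signup form", 0.062, 0.060),
    ("referral prompt on receipt page", 0.031, 0.022),
]

DECAY_THRESHOLD = 0.10  # flag gains that have eroded by 10% or more

for name, rate_at_win, rate_now in winning_tests:
    decay = (rate_at_win - rate_now) / rate_at_win
    if decay >= DECAY_THRESHOLD:
        print(f"RETEST: '{name}' has decayed {decay:.0%} since winning")
    else:
        print(f"OK: '{name}' is holding ({decay:.1%} erosion)")
```

A flagged test becomes the trigger described above: a prompt to run a new test around the area that originally provided the gain.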

Without this information, you’ll feel like you’re running in place or, even worse, find your momentum slowing because the gains from earlier tests that you were counting on as stepping stones are no longer available to you.

To start, it may be useful to review the winning tests every couple of months to see how well they’re performing and then adjust your growth hacking approach from there.

About the author

Anuj Adhiya learned growth hacking as a community moderator and then Director of Engagement and Analytics at GrowthHackers (founded by Sean Ellis, who coined "growth hacking"). He's mentored and coached a number of startups on the growth methodology at Harvard Innovation Labs & Seedstars. He's currently the VP of Growth at Jamber.

This article can be found in the category: