Promising Metrics

Time to read: 5 minutes

How to conduct a growth experiment

Thomas MacThomas

Marketer in Residence @ Forward Partners

It is my firm belief that the best businesses are built around exceptionally strong processes. Having a clearly defined process for growth can help your startup deliver impressive performance, punching like a heavyweight when it is in fact only a lightweight.

Having a rigorous testing regime facilitates this, but at the core of the tests are the experiments themselves. Keep reading to uncover how to structure great experiments.

Key takeaways

  • Having a process ensures you make the best use of your time;
  • Experiments should have a rigid structure;
  • Always test against something you can measure;
  • Celebrate success as well as failure.

The Growth Experiment

I’m a scientist by trade (a chemist, to be exact), so I enjoy revisiting the way we planned chemistry experiments. Before every experiment commenced we’d have to prepare a document outlining it, consisting of: aim, hypothesis, method, results, conclusion. I like to recreate these little experiment plans when I plan a growth experiment.


Aim

The aim of the aim is to clearly and succinctly define the purpose of your experiment: what are you actually trying to figure out by conducting it? Importantly, aims should be tied to your business KPIs or derived business KPIs. Aims should also be detailed enough that you can’t get distracted mid-way through your experiment and send it off-track.

Good examples of aims might be:

  • Increase signup conversion by personalising the landing page based on IP location.

  • Decrease week 1 churn by creating a push notification on-boarding flow

  • Improve email open rate by using user-specific send times

Bad examples (not detailed enough, therefore easy to lose focus):

  • Increase signup conversion

  • Decrease week 1 churn

  • Improve email open rate

Bad examples (not tied to KPIs):

  • Improve on-boarding

  • Improve retention

Importantly, you should be able to measure the thing you are aiming to improve. If you can’t, you’re wasting your time.


Hypothesis

This section is all about predicting the future: write down what you think will happen as a result of the experiment. It is really important that you come up with a hard, outcome-based prediction you can test against. Ideally this includes the metric you want to move and a guess at how much you want to move it by.

Here are some examples:

  • By adding in IP based geo-targeted imagery to our landing pages we expect geo-targeted campaigns’ signup conversion rate to increase 10%.

  • By rolling out this personalised on-boarding flow I expect the week 1 DAU number to increase 20%.

  • By sending emails at times where we know people open them on a 1-2-1 basis I expect open rates to increase 15%.

Again, here’s what not to do:

  • Signups increase (too vague, no target)

  • On-boarding flow results in better on-boarding (no metrics, no target)

  • Email deliverability improves (doesn’t match the aim)


Method

In a traditional science experiment this section would probably be the longest: super detailed and written in the passive voice (for a reason I never really understood). Here it doesn’t have to be. Simply list the steps you’re going to take for your test, with enough detail that someone else could pick this document up and recreate the test without any other knowledge. This section also helps you figure out who else you might need to involve in your test. For example, do you need the help of a developer to make some changes?

Here is an example of what a method could look like, using the ‘Increase signup conversion by personalising the landing page based on IP location’ aim:

  1. Make a copy of landing page A and name it landing page B.

  2. In landing page B, add some JavaScript logic to pull out the IP address of the client.

  3. Using an IP lookup database, match the client’s IP to a regional location.

  4. Build up a list of IP locations that we want to test against (London, Glasgow, Manchester).

  5. Build logic to switch landing page images based on city location.

  6. Release the page.

  7. Use Optimizely to A/B test these pages.

  8. Ensure Google Analytics is set up on these pages so we are measuring the signup conversion rate per test.
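Steps 2–5 above can be sketched in a few lines of JavaScript. The city list and image paths below are illustrative assumptions, and the geolocation endpoint is a placeholder, not a real service:

```javascript
// Map each targeted city to its localised hero image.
// City names and image paths are illustrative assumptions.
const CITY_IMAGES = {
  London: "/img/hero-london.jpg",
  Glasgow: "/img/hero-glasgow.jpg",
  Manchester: "/img/hero-manchester.jpg",
};
const DEFAULT_IMAGE = "/img/hero-default.jpg"; // control imagery

// Pure helper: choose the image for a city, falling back to the control.
function heroImageFor(city) {
  return CITY_IMAGES[city] || DEFAULT_IMAGE;
}

// In the browser you would resolve the visitor's city from their IP via
// a geolocation service (placeholder URL below), then swap the image:
//
// fetch("https://ip-lookup.example.com/json")
//   .then((res) => res.json())
//   .then((geo) => {
//     document.querySelector("#hero").src = heroImageFor(geo.city);
//   });
```

Keeping the city-to-image mapping in a plain lookup makes it easy to add new test locations later without touching the switching logic.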


Results

The results section is fundamental in determining whether your test was a success or not.

This section is usually tabular and should cross-reference your hypothesis. The data should essentially be able to support or disprove your prediction.

Using the hypothesis “By adding in IP based geo-targeted imagery to our landing pages we expect geo-targeted campaigns’ signup conversion rate to increase 10%”, let’s see how the results might look:

[Results table: signup conversion rate for each tested location (London, Glasgow, Manchester) compared with the control]
Conclusion

This section is about summarising the numerical results and confirming whether your hypothesis is indeed correct or not. In the example above it might look like the paragraph below. Note that as the test had one failure point (Manchester), it’s worth highlighting this and potentially suggesting some further investigation.


The results from this test indicate that we should roll out this change to the remaining locations we have available. I would expect a 16% uplift on the control - thereby beating our 10% hypothesis. There is a chance that the imagery and/or targeting for some locations could be improved, as we saw a negative effect in Manchester. The Manchester variant should be A/B tested with new imagery to investigate and hopefully rule this out.
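The uplift figure in a conclusion like this is simple arithmetic: the relative change of the variant’s conversion rate over the control’s. A minimal sketch (the rates below are made-up illustrations, not this test’s actual data):

```javascript
// Percentage uplift of a variant's conversion rate over the control's.
function upliftPercent(controlRate, variantRate) {
  return ((variantRate - controlRate) / controlRate) * 100;
}

// Illustrative numbers only: a control converting at 5% and a
// variant converting at 5.8% gives a 16% relative uplift.
```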

The Growth Experiment Board

Visibility. This is why we have growth experiment boards. They might not be physical boards - Trello makes a really good alternative - but they should be accessible by all, so that people can check in on what experiments are running and what assumptions have been tested.

Here’s what an example Trello board might look like. Feel free to copy and use for yourself. By using Trello you can keep a growth experiment within a card, and allow collaboration. It works really well. I’ve also added details within the cards themselves so be sure to have a look.

CPA Walls of Shame

When you’re running acquisition tests, things don’t always go to plan. You should (most likely) be optimising to CPA (whatever the A is in your specific case).

The CPA wall of shame is there to unearth all the tests that went wrong. Tests sometimes don’t work, and that’s OK - it’s part of the acquisition marketing game. It’s much better to be open and honest about failing tests than to hide them. That’s where the CPA wall of shame comes in: get a board up on a wall in the office, then list all your tests with your initial CPAs. The biggest (i.e. worst) CPA sits at the top as a gentle reminder of what you’ve tried and what hasn’t worked. Conversely, at the bottom are all the best channels. Keep it light-hearted, and award prizes for the lowest and second highest!
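The board itself is just your tests ordered by CPA, worst first. A toy sketch of that ordering (the channel names and CPA values are invented for illustration):

```javascript
// Acquisition tests with their initial CPAs.
// Channel names and CPA values are invented for illustration.
const tests = [
  { channel: "Facebook lookalikes", cpa: 42.5 },
  { channel: "Google Search (brand)", cpa: 8.2 },
  { channel: "Display retargeting", cpa: 95.0 },
];

// Order tests worst-first (highest CPA at the top of the wall).
function wallOfShame(tests) {
  // Copy before sorting so the original list is untouched.
  return [...tests].sort((a, b) => b.cpa - a.cpa);
}
```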

Thomas MacThomas

Marketer in Residence @ Forward Partners

Tom is the former Head of Marketing at Forward Partners. He is an award-winning growth marketer, having headed up the marketing function at high-growth daily deals site Wowcher, online gaming firm William Hill Online and, more recently, the mobile app Bizzby. Tom helps our startups with marketing strategy and support, everything from PPC all the way through to TV.

