6 signs you shouldn't run that as an A/B Test

There are a couple of opposing philosophies on this:

‘Why are we running everything as tests? It’s frustrating and slowing us down to launch new features’


‘Why are we not testing impact? Things are rolling out without us seeing if they even work…’

I find this dilemma super interesting and will likely write some more notes on it, especially as VEED’s product growth division continues to grow. To keep things concise, here are six signals which suggest you shouldn’t run something as a test.

No single point is enough on its own to decide whether to test or not, but each is worth taking into account:

1. The risk is low

‘What’s the worst that can happen?’

The overall risk is low. Risk can be broken down into a few different parts:

  • Business risk: If this goes wrong, will the org suffer financially (lost revenue, churn)?
  • Usability risk: If this goes wrong, will users struggle to do what they came here to do?
  • Engineering risk: If this goes wrong, have we wasted a lot of engineering resource?

If any of these risks are high, that could be a sign to test before rollout, in a way that allows you to switch the rollout off if things go south.

2. You have high confidence of success

‘Data and insights: Dude, trust me’

If you have super high conviction that this feature/rollout is for the best, then do you need to test it?

As an extreme example, a team is working on a faster video render time. Do we need to roll this out as a test to see if it improves leading retention indicators? No.

3. It’ll take too long to get results

We’re fortunate to have a high volume of users coming through, which means our tests reach stat sig (statistical significance) pretty quickly. Some early-stage or low-traffic products - or even parts of the product further down the funnel - don’t have this luxury.

If your test is going to take 28 days to get results, that’s worth factoring into the decision of whether or not to run it.
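As a back-of-envelope check, here’s a sketch of how you might estimate test duration from your traffic, using the standard two-proportion sample-size formula. The function name, rates, and traffic numbers are illustrative assumptions, not anything VEED-specific:

```python
from statistics import NormalDist

def days_to_significance(baseline_rate, relative_lift, daily_users,
                         alpha=0.05, power=0.8):
    """Rough estimate of how long a 50/50 two-variant A/B test needs.

    baseline_rate: control conversion rate (e.g. 0.10 = 10%)
    relative_lift: smallest relative lift worth detecting (e.g. 0.05 = +5%)
    daily_users:   users entering the experiment per day
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    # Standard two-proportion sample-size formula, per variant
    n_per_variant = ((z_alpha + z_beta) ** 2
                     * (p1 * (1 - p1) + p2 * (1 - p2))
                     / (p2 - p1) ** 2)
    return 2 * n_per_variant / daily_users  # both variants share the traffic

# A 10% baseline and a +5% relative lift with 2,000 users/day
# works out to roughly two months; at 200 users/day it's over a year.
print(round(days_to_significance(0.10, 0.05, 2000)))
```

The point of the sketch: duration scales inversely with traffic and with the square of the effect size, so a low-traffic page or a small expected lift can push a test well past the 28-day mark.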

Read more on this one: Do I have enough users to run an A/B test?

4. Many moving parts - testing each one would slow you down

Testing can eat up a lot of time and add complexity. If you’re looking to roll out multiple improvements in a short period of time (think rapidly improving an area of the product, or the early days of a product), then you don’t want to get bogged down scientifically testing each and every part of the product.

5. You’re launching something brand new

If you’re launching something brand new, then what’s the control? Not much more to say on this one…

6. You don’t have many stakeholders

As a product growth division working across different parts of the product, you have a bunch of stakeholders. If you’re suggesting changes and testing hypotheses to improve metrics, then these people care about how impactful the work is. They have a horse in the race 🐎

It’s not enough to say ‘trust me on this one, let’s roll it out’. You may also be learning the level of impact across different initiatives.

On the other hand, if you’re working on an area with low stakeholder interest, such as a change on a low-traffic landing page, there’s a lot more freedom to move with your ‘gut’.

So there they are: six things to consider when wondering whether you should be running tests.

Check out some posts below for some similar reading, and sub to the list at the bottom 👇🏼
