The conversion nightmare: managing the risks of releasing website changes

October 28, 2024 | Jens Van de Capelle, Senior Digital Marketing Consultant at The Reference


Have you ever launched a website or app update only to discover that your conversion rate dropped after the release? It is a marketer's worst nightmare. But have you considered using experimentation as a way to manage the risks inherent in making changes?


We get it: everyone is enthusiastic about the new changes or features and proud of the work that has been done. Everyone is eager to get it out there and showcase the result of their work. So the changes are pushed live to all visitors as soon as the QA team confirms the features work as expected.

However, user reactions might not always align with our expectations, and the impact of changes can be surprising. Once people see something unexpected in their numbers, they turn to a pre-post analysis to try to evaluate the impact of the changes that were made.

Downsides of pre-post analysis

Relying solely on pre-post analysis has some downsides:  

  • Difficulty in isolating the impact: It is challenging to accurately assess the impact of changes as this method can be heavily influenced by external factors such as traffic distribution, seasonality, and more.  
  • Risk of negative effects: If changes unexpectedly have a negative effect on user behavior, it could harm your business. And the changes are visible to 100% of your visitors! When that happens, you probably need to start talking about a rollback scenario, and nobody likes doing that.

If only there was a way to mitigate this risk… Oh wait, there is! 


Controlled experiments

When making bigger changes, for instance to navigation or to critical pages and flows such as a pricing page or lead-generation and checkout funnels, we recommend using a controlled experiment (A/B test) to validate the impact of the changes on user behavior and on your digital and business KPIs.

The simple fact is, you cannot expect to predict all possible ways users might interact with something new on your website or app. QA testers can verify that the features work as intended on different devices and browsers, and that there are no bugs in the expected user flow (and a limited set of edge cases). If they get enough time, they will do some regression testing on other key features and flows to check that nothing broke due to the recent changes. But they cannot always predict exactly how all segments of real-world users from your target audience will react to the changed interface.

And that is where controlled experiments add value: they expose the changes to a portion of your real users and give you insight into how those users engage with them compared to the original version. Assuming your sample size is high enough, this is a fairly reliable way to predict the impact on your digital and business KPIs if 100% of your visitors were exposed to the changes. Or at the very least, you will know with some degree of certainty that you are not hurting your KPIs.
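How high is "high enough" depends on your baseline conversion rate and the smallest lift you want to be able to detect. As a rough illustration (not tied to any specific testing tool), here is a minimal sketch of the standard two-proportion sample-size estimate; the 95% confidence and 80% power defaults are common conventions we assume for the example, not something prescribed by this article.

```typescript
// Minimal sketch: estimate how many visitors each variation needs before
// a conversion-rate difference of a given size becomes detectable, using
// the standard two-proportion z-test formula. The 95% confidence and 80%
// power defaults are common conventions, not requirements.

function sampleSizePerVariation(
  baselineRate: number, // current conversion rate, e.g. 0.03 for 3%
  minimumLift: number,  // smallest relative lift worth detecting, e.g. 0.10 for +10%
  zAlpha = 1.96,        // two-sided 95% confidence
  zBeta = 0.84          // 80% power
): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + minimumLift);
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

// A 3% baseline and a +10% minimum detectable lift already require
// roughly 53,000 visitors in each variation:
console.log(sampleSizePerVariation(0.03, 0.1));
```

Numbers like these are why not every page or change is a good candidate for testing: low-traffic pages may simply never reach a usable sample size.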

Ideally, this is part of an integrated CRO track where you continuously gather insights through conversion research, build hypotheses based on them, and validate those hypotheses with controlled experiments. But if you do not have the resources or time for that, you can also consider setting up a "testing only" track focused on validating the impact of changes you are planning to make to your website or app.

Depending on the nature of the changes, you can build the experiment in an A/B testing tool before doing the development work, or set up a split-URL test after the development work is done.

A/B testing vs Split-URL testing

A/B testing

If it takes less time to set up the experiment with a (simplified) version of your changes than to fully develop them (and make them testable), it can be interesting to test via a tool-based setup. This can be client-side or server-side, but the variation is built within the A/B testing tool, so you can get an idea of the impact before investing the development time. If the impact is not what you had hoped for, you do not need to make the investment to fully develop the changes.
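To make the client-side/server-side distinction concrete, here is a minimal sketch of what a server-side setup typically does under the hood: it deterministically buckets each visitor so they keep seeing the same variation on every visit. The hash-based assignment, the experiment name and the 50/50 split are assumptions for the example; a real A/B testing tool also takes care of targeting, exposure logging and the statistics for you.

```typescript
// Minimal sketch of server-side variation assignment: hash a stable
// visitor ID into a bucket so every visitor keeps seeing the same
// version across visits. The experiment name and the 50/50 split are
// illustrative assumptions.

import { createHash } from "node:crypto";

type Variation = "original" | "variation";

function assignVariation(visitorId: string, experiment: string): Variation {
  const digest = createHash("sha256")
    .update(`${experiment}:${visitorId}`)
    .digest();
  // Map the first 4 bytes of the hash to a number in [0, 1].
  const bucket = digest.readUInt32BE(0) / 0xffffffff;
  return bucket < 0.5 ? "variation" : "original";
}

// The same visitor always lands in the same bucket:
console.log(assignVariation("visitor-123", "pricing-page-redesign"));
```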

Split-URL test

Of course, there are cases where you cannot thoroughly test something without some backend development, or where it would simply take too much time to set up a test via an A/B testing tool. In that case it is best to develop your changes and push them live on a separate URL (or let the variation be triggered by a URL parameter) that is not yet linked to from any other pages. This way you can set up a clean split-URL test that redirects a portion of the users visiting the original URL to the new version. Ideally you also build in a feature switch, so that you can change the default version that is shown if the experiment is successful.
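As an illustration, here is a minimal sketch of such a setup written as Express-style middleware. The /pricing and /pricing-new URLs, the exp_pricing cookie and the NEW_VERSION_DEFAULT flag are illustrative assumptions; in practice your testing tool would normally handle the bucketing and the measurement.

```typescript
// Minimal sketch of a split-URL test with a feature switch, written as
// Express-style middleware. URLs, cookie name and flag are assumptions.

import express from "express";

const app = express();

// Feature switch: flip this once the experiment wins, so the new
// version becomes the default instead of a 50% redirect forever.
const NEW_VERSION_DEFAULT = process.env.NEW_VERSION_DEFAULT === "true";

app.get("/pricing", (req, res, next) => {
  if (NEW_VERSION_DEFAULT) {
    return res.redirect(302, "/pricing-new");
  }
  // Reuse the visitor's earlier bucket from a cookie, or assign one 50/50.
  const match = req.headers.cookie?.match(/exp_pricing=(new|original)/);
  const bucket = match?.[1] ?? (Math.random() < 0.5 ? "new" : "original");
  res.setHeader("Set-Cookie", `exp_pricing=${bucket}; Path=/; Max-Age=2592000`);
  if (bucket === "new") {
    return res.redirect(302, "/pricing-new");
  }
  next(); // fall through and serve the original page
});
```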

While A/B testing offers a quick and efficient way to measure user reactions, split-URL testing is invaluable for more complex changes that require backend development. 


The best way of detecting what works and what causes problems is testing your changes in an isolated way. But usually there are multiple changes that are rolled out to production in a single release. Theoretically you could consider testing all of that together based on a URL parameter (triggering the “new version”) as a safety protocol before releasing to all users. However, if you then see a drop in conversions, it will not be easy to identify exactly what caused it.

That is why it is important to identify what you should test and what you should not. You cannot test every change you want to make to your website or app; nobody has the time (or traffic volume) to do that. To help you decide what to test, it is essential to have a good view of what is important to your prospects and customers. What drives them, what do they want to achieve, and what information do they need to make a decision? What causes them to hesitate? But that is a topic we will cover another time.

Conclusion 

By consistently validating the impact of significant changes, you are essentially de-risking your digital product development efforts. When done right, it will also help you identify what works well and what does not, and you will learn something about your users, which will help you prioritize better in the future.

Let's continue the conversation


Are you ready to avoid the nightmare of a large drop in conversion rates after releasing website changes? Get in touch with us; we would love to help you face your digital challenges.

