Forget AI, Focus on this in 2024 to grow experimentation

Towards the end of 2023, Kameleoon, an A/B testing software vendor, published an article on their blog interviewing CROs and experimentation leaders in the industry about the trends to expect in 2024.

Everyone interviewed made bold predictions.

  • AI is going to transform CRO
  • Experimentation is going to get embedded in every part of the organization
  • CRO will become Decision Science
  • ChatGPT will replace the CRO specialist

As humans, we are pretty bad at predicting the future.

Here’s how life in the 2000s was envisioned by those in the 1950s.

Whilst some predictions may come true, I want to challenge the CRO experts on theirs because I have yet to see any indicators that the industry is moving in that direction. In this article, I'll unpack their predictions, show why they are wishful thinking and unlikely to come to pass, and explain what you should be focussing on instead.

CRO will become Decision Science – Unlikely

Turning CRO into Decision Science is not as simple as a rebrand. For years, industry insiders have debated dropping the term CRO (Conversion Rate Optimization) for something different because the term is self-limiting and doesn't fully encapsulate the breadth of work done by experimentation teams.

CRO still doesn't get the respect it deserves, and this stems primarily from the fact that organizations have invested in testing as a tactic and a bolt-on rather than as a mindset. Often, experimentation teams are tasked with KPIs that revolve only around improving conversions and increasing revenue.

Moreover, the information flowing upwards to the senior levels of the business is mainly limited to those KPIs. For senior management to make better decisions, that data needs to be translated into insights they can act on.

Experimentation has wider applications, but they have yet to be harnessed because of these limitations. This is why I don't believe that CRO will become decision science in 2024.

Experimentation will get embedded in every part of the organization – Unlikely

Similar to the previous point, this is more wishful thinking from practitioners who dream of a day when experimentation will be done in all parts of the business.

The current landscape of testing is very much web-focussed and has slowly grown to be adopted by product teams (though still very much at a surface level).

The challenge, however, isn't that experimentation can't scale. It's that scaling requires the right conditions to be in place:

Authority – CROs lack the authority to drive change and set a remit for running experiments as part of the team's activities. There is no top-down mandate from senior levels.

Motivation – The new teams being introduced to experimentation aren't as motivated. Often, experimentation is added on top of a growing list of tasks they need to do and can run counter to their own KPIs.

Setup – Experimentation teams are still staffed with technically minded experimentation specialists. To drive change, they also need people who can build relationships and do the internal sales and marketing required to motivate others. There isn't a senior VP of experimentation who oversees testing across multiple channels; testing is still myopic and web-focussed.

AI will transform CRO – Maybe, but not yet

Last year saw a big rise in AI tech, with ChatGPT and the like gaining immense popularity.

AI definitely has its uses and will become a mainstay in our lives over the years. With ChatGPT, many saw applications in CRO such as generating test ideas or summarising customer research.

Whilst I'm not writing this off, I believe it will be a good few years before this happens.

For AI to be useful, the data sets it's trained on need to be good. If experimentation data is not properly captured, is poorly maintained and is riddled with shortcuts, the AI will interpret it accordingly.

And herein lies part of the problem with AI.

What to focus on instead?

This is not so much a prediction as a belief: the organizations that focus on their foundations and fix the errors accumulated over the years will have a distinct advantage over teams that are just going through the motions.

If the question was posed to you – what’s the most important part of an experimentation program?

  • The people?
  • The tools?
  • The processes?

The correct answer is – the data stored in the documentation.

At a basic level, documentation is the collection of all the research insights, the ideas captured and prioritised, and the experiments run with their results.

But documentation is much more than this. Most experimentation teams view documentation as dumping data into spreadsheets, so it's no surprise that the data they collect lacks context, depth and quality control.

Here's how to start improving your foundations so experimentation can grow:

  1. What experimentation data do you track? – Tracking ideas, experiments and results is the bare minimum, but spend time reviewing what information related to those activities is being tracked. Are there missing pieces of information? Do you notice half-filled experiment plans or metrics that haven't been reported on? (A simple audit, like the sketch after this list, can surface these gaps.)
  2. When is the information documented? – Is your team entering data only when the experiment is finished, or can you capture more about the process it went through? A good experimentation program management documentation system is not just about capturing data but about ensuring data is captured at the right time and in the right way. That way, it is easier to spot red flags before they become a serious issue.
  3. Review how the information is categorised and classified – The biggest reason experimentation teams struggle to find the right insights or data is that the information isn't classified properly. Worse still, without proper training and monitoring guardrails in place, different teams classify data in their own way, which leads to chaos. Vet all the tags used to classify experiments and delete, merge or rename the ones that were applied incorrectly (the sketch below also flags likely duplicates).
  4. Review the process – Experimentation data is only useful if it's reliable. Reliability comes from knowing that due process was followed during all stages of the experiment lifecycle – from ideation to planning to execution to reporting. Can you spot where corners were cut or the prescribed process was not followed? Are there enough guardrails in place to stop people from bypassing the process?
  5. Who is engaging? – Tracking who is engaging with the insights and information you share can help you understand the gaps in your organization's experimentation potential. Simply sharing spreadsheets and decks will not be enough to improve your program. Experimentation only grows in an organization when there is a conscious effort to share insights and to monitor whether the message is making an impact. Are people accessing your spreadsheets or your project management tool? Are they asking questions and offering their own ideas?
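
To make points 1 and 3 concrete, here's a minimal sketch of what such an audit could look like. It assumes your documentation can be exported to a CSV file; the file name, column names and required fields are hypothetical, so adapt them to whatever your own tool produces.

```python
# Minimal sketch (hypothetical column names): audit an exported experiment
# log for half-filled records and inconsistent tags.
import csv
from collections import Counter, defaultdict

# Assumed required fields - replace with whatever your experiment plan template demands.
REQUIRED_FIELDS = ["hypothesis", "primary_metric", "start_date", "end_date", "result", "decision"]

def audit(path: str) -> None:
    missing = defaultdict(list)       # field name -> experiment ids missing it
    tag_counts = Counter()            # raw tag -> how often it is used
    tag_variants = defaultdict(set)   # normalised tag -> raw spellings seen

    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            exp_id = row.get("experiment_id", "<no id>")
            # Point 1: flag half-filled experiment records
            for field in REQUIRED_FIELDS:
                if not (row.get(field) or "").strip():
                    missing[field].append(exp_id)
            # Point 3: group tag spellings that differ only in case or spacing
            for raw in (row.get("tags") or "").split(";"):
                raw = raw.strip()
                if raw:
                    tag_counts[raw] += 1
                    tag_variants[raw.lower().replace(" ", "")].add(raw)

    print("Missing fields:")
    for field, ids in missing.items():
        print(f"  {field}: missing in {len(ids)} experiments, e.g. {ids[:3]}")

    print("Tags that probably need merging:")
    for variants in tag_variants.values():
        if len(variants) > 1:
            print("  " + " / ".join(sorted(variants)))

    print("Tags used only once (candidates to rename or delete):")
    for tag, count in tag_counts.items():
        if count == 1:
            print(f"  {tag}")

if __name__ == "__main__":
    audit("experiment_log.csv")  # hypothetical export from your documentation tool
```

Even a rough report like this makes the gaps visible and gives you a shortlist of records and tags to clean up before you invest in anything more sophisticated.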

A lot of this foundational work is time-intensive and needs constant monitoring. That's one of the reasons experimentation teams struggle to do it properly: for one, the tools they use don't enable them to achieve it, and secondly, it's yet another task on an ever-growing list of jobs they need to do.

The teams that understand the repercussions of not doing it and actively focus on fixing it will be the ones that end 2024 better than their peers and competitors.

Manuel da Costa

A passionate evangelist of all things experimentation, Manuel da Costa founded Effective Experiments to help organizations make experimentation a core part of their business. On the blog, he talks about experimentation as a driver of innovation, experimentation program management, change management and building better practices in A/B testing.