Anatomy Of An Experiment Insight

Your experiment learnings and insights are the most powerful part of the experimentation process. If they are overlooked or handled poorly, they can nullify all the work you have put into running the experiment.

The default state for many experimentation teams is to lead with the experiment itself, i.e. the changes, the results and statistical significance. Whilst this information is important and non-negotiable to have in your experimentation knowledge base, it is vital to understand that other audiences in your organization don’t care about the nitty-gritty of testing.

They do, however, care about what the learnings could mean for them and how it can help them.

Trapping this information in PowerPoint decks and spreadsheets only makes it harder to find and utilise. In many organizations, these decks are largely forgotten beyond the initial presentation and sit gathering digital dust. The time, money and effort that went into creating them becomes a wasted endeavour.

What do good insights enable?

Before we dive into dissecting a useful insight, we need to understand the outcome of writing good insights.

Imagine a world where stakeholders from different departments and senior management can find the information easily, understand it without spending hours making sense of data, and act on the insight by connecting it to the decisions they make.

Game changer! This is an experimentation team’s dream.

Stakeholders valuing, understanding and using the knowledge your team has worked hard to gather.

Insights are not

  • A PowerPoint deck with slides
  • Spreadsheets with aggregated numbers
  • Dashboards
  • A complete data dump from all the tools you use
  • Stats and jargon

How to construct a good insight

To create a good insight, use these principles as a guide:

  • Who am I creating this insight for? Who is the audience (team, person or level)?
  • What can I tell them in the simplest possible language that they will understand?
  • What action can I recommend they take?
  • How can I make it easy for them to find this insight in the future?
  • How can I translate the same insight for a different audience?

Now let’s see it in practice.

The statement in the example above is:

  • Easy to understand: it is jargon-free and can be understood by anyone reading it
  • Tagged to classify and categorise the insight, making it easy to find in the future
  • Targeted at the ecommerce manager, who can take action on it
  • Backed by supporting data that can be viewed in more detail if the reader is interested.

Why is this a better way of doing things?

  • It encourages engagement. By keeping things “bite-sized”, the reader is not overwhelmed with a full deck of information; it is easy to skim and understand.
  • It piques their interest without a lot of commitment upfront.
  • It makes insights easy to find in the future.
  • If readers want to engage further, additional data in the form of attachments allows them to go deeper.

Essentially, this approach takes the person on a journey layer by layer. The first layer is the simple statement (alongside other insights); they pick and choose the ones that interest them most. The supporting data and attachments are then hyper-relevant to the insight, and the person can explore further.

Did you know: Effective Experiments can easily track stakeholder and user engagement with insights, helping your experimentation team understand how to drive engagement better.

Want to learn more about how Effective Experiments can help you grow the engagement and visibility of your experimentation program?


Manuel da Costa

A passionate evangelist of all things experimentation, Manuel da Costa founded Effective Experiments to help organizations make experimentation a core part of every business. On the blog, he talks about experimentation as a driver of innovation, experimentation program management, change management and building better practices in A/B testing.