Rethinking Experimentation Documentation: Beyond Data Entry

If your organization is running an experimentation program, you must track your ideas and experiments. It should be a no-brainer. This is company IP: the company has invested in experimentation and owns the output.

It shocks me that even now (at the time of writing, in 2023), 58% of organizations don't have an experimentation knowledgebase, according to Speero's most recent Industry Maturity Benchmark survey.

Can you imagine an organization that doesn't have a proper way of accounting for its finances? You couldn't. Yet the work that goes into gathering insights, running experiments and reporting, all of which shapes business decisions, often receives little care.

58% of organizations don’t have a knowledgebase

Speero Industry Maturity Benchmark 2023

As a company that specialises in helping organizations improve the processes and visibility of their experimentation programs, we get numerous requests for demos of our Experimentation Ops Insights and Workflow platform. It is purpose-built for the entire spectrum of experimentation processes, with time-saving automations and integrations along with ways to engage the wider organization.

Whenever we do an exploration call with a new prospective customer, we always ask them this question.

“What problem are you looking to solve?”

Their answer reveals pretty quickly whether we can help them or not.

If the answer is something like “We need a tool to track our experiments and ideas” but they cannot articulate why, or the impact of not doing it, I direct them to a generic tool like Airtable or ClickUp.

They’re in the mindset of ‘data dumping’, and that’s not what we do here at Effective Experiments.

A key indicator we use to understand how serious a company is about their experimentation program is how much attention and care they give to their documentation.

If a company is serious about experimentation at the senior management level, we notice far more granularity in the data, as well as engagement. The organization cares about the knowledge and insights gathered. It is their institutional knowledge.

The inverse is where managing an experimentation program is treated as mere data storage in a spreadsheet or generic project management tool.

Many in the industry have espoused the wonders and ease of Airtable and generic project management tools as ways to manage an experimentation program. Their argument is that these tools are easy to customise, cheap, and can be used to track data and build dashboards.

Herein lies the biggest shortcoming: managing an experimentation program is about more than entering data into a system. Many nuances go unaddressed when using generic tools. CROs and experimentation specialists need to start seeing the bigger picture.

Too often, they’re stuck focusing on just one part: data entry.

Entering data into a tool is one small part of managing an experimentation program, and if not done right, it renders your documentation useless.

Here’s how to go about rethinking experimentation documentation.

The Process Matters

How do you keep track of your experiments, results and insights?

Option A: You enter data into a tool as and when convenient, usually at the end of the experiment.

Option B: You record each piece of research, each idea, experiment and report as it happens in the real world.

If you answered Option A, you need to think about what you’re missing out on.

The process of experimentation is as important as the experiment itself. It can reveal a lot about the quality, bottlenecks and challenges faced in your experimentation.

Experiments need to be reliable if you use them to make decisions. A standardised process ensures that every experiment follows the same path through design, development, QA and approval cycles in a dependable way.

If you don’t track the process as it happens in the real world and instead just do a data dump at the end, you may never notice that an experiment skipped QA or was stuck in an approval cycle longer than expected.

The data points gathered along the way help you track two very important metrics: throughput and lag.
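As a rough illustration of why per-stage tracking matters, here is a minimal sketch of how throughput and lag could be computed once each experiment carries real-world stage timestamps. The field names and stage model are hypothetical, not taken from any particular tool:

```python
from datetime import date

# Hypothetical stage timestamps, recorded as each experiment moves
# through the workflow in real time (not back-filled at the end).
experiments = [
    {"id": "EXP-1", "design": date(2023, 1, 2), "development": date(2023, 1, 9),
     "qa": date(2023, 1, 16), "approval": date(2023, 1, 18), "launched": date(2023, 1, 20)},
    {"id": "EXP-2", "design": date(2023, 1, 5), "development": date(2023, 1, 12),
     "qa": None,  # QA never happened: a red flag this data makes visible
     "approval": date(2023, 2, 1), "launched": date(2023, 2, 3)},
]

def throughput(experiments, start, end):
    """Number of experiments launched within a date window."""
    return sum(1 for e in experiments
               if e["launched"] and start <= e["launched"] <= end)

def stage_lag(experiment, stage_from, stage_to):
    """Days spent between two stages, or None if a stage was skipped."""
    a, b = experiment[stage_from], experiment[stage_to]
    return (b - a).days if a and b else None

print(throughput(experiments, date(2023, 1, 1), date(2023, 1, 31)))  # 1
print(stage_lag(experiments[0], "qa", "approval"))  # 2 days
print(stage_lag(experiments[1], "qa", "approval"))  # None: QA was skipped
```

The point of the sketch is that a `None` in a stage field, or an unusually large lag between stages, is exactly the kind of signal an end-of-experiment data dump erases.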

Governance is critical

Would you trust your analytics tool if there was a suspicion that the data was inaccurate?

You wouldn’t. The same goes for your Experimentation Knowledgebase.

Garbage in. Garbage Out.

Lack of oversight, governance and guardrails could mean loss of trust in your experimentation program.

All of your documentation is useless if the data isn’t reliable.

Reporting on the number of tests you ran and your successes doesn’t paint the full picture. There could be experiments riddled with questionable practices, cherry-picked metrics and conclusions formed only after the results were seen. The latter practice is known as HARKing (Hypothesizing After the Results are Known), and it’s a hidden epidemic in the industry.

We have seen first-hand how teams, by not reporting all the experiments they run or by keeping everything in different systems that not everyone can access, create their own narrative that paints their work in a positive light.

Only proper governance and guardrails can help your program avoid this pitfall.

Experimentation documentation builds in clear processes and governance controls at each step of the journey.

The outcome is a trustworthy program with reliable experiment data. This helps your stakeholders make decisions rooted in clean data.

What’s the purpose of the data?

You’re keeping track of your experiments, but what purpose does the data serve once it is entered?

The answer to this is usually to give visibility and to share it with everyone.

The question I pose to you here is: does this really happen?

Is your documentation visible and transparent so everyone can self-serve, or is the experimentation specialist or team the gatekeeper of it all?

If running experiments is about learning, then you have to prioritise not just how the data is shared but also keeping a close eye on whether the information is seen and acted on.

If this sounds like too much work, it is. But without it, you won’t see any meaningful engagement.

Sure, you can share PowerPoint decks via email, but did you keep a close eye on whether they were viewed or actioned? Tracking this gives you insight into whether experimentation is valued and whether the insights and results you provide make any meaningful impact in the organization. Then you can decide how to ramp up engagement and evangelize experimentation to the wider org.

A culture of enablement & coaching

This is the big leap that Experimentation Managers and senior management need to make when it comes to Experimentation documentation.

The data entry layer is non-negotiable. It must follow strong processes and operating procedures, and an accurate record MUST be kept at all times.

This gives CRO managers a clear picture of which individuals and teams need support, training and feedback.

If documentation is treated only as data entry into a spreadsheet or project management tool, without a clear plan for rolling it out, improving it and coaching practitioners to get better, then all you’re doing is running and documenting tests without seeing the bigger picture.

The data captured should help you spot red flags in process adherence, planning quality and reporting, and stop bad practices such as HARKing. This way, you can nip issues in the bud before they become long-term problems.

Final thoughts

Project management tools are not the right solution for managing an experimentation program, because there is far more to documentation than data entry.

Learn more about how Effective Experiments supports organizations looking to grow experimentation capabilities in their organization.

Manuel da Costa

A passionate evangelist of all things experimentation, Manuel da Costa founded Effective Experiments to help organizations make experimentation a core part of every business. On the blog, he talks about experimentation as a driver of innovation, experimentation program management, change management and building better practices in A/B testing.