
23.09.2025

10 min read

Experimentation at scale: Building an agile approach to UX and Digital Experience

This article was updated on: 24.09.2025

No one knows exactly how many experiments are taking place on the internet. Still, anyone who works in digital marketing can tell you that it’s a lot. With the proliferation of AI, that number is only going to increase as the cost of entry and the skill required to launch an experiment decrease. 

In 2017, Mark Zuckerberg noted that roughly 10,000 versions of Facebook are running at any given time, and credited that volume of experimentation as a key driver of the company's advantage. But as volumes increase, the quality of experimentation comes under pressure, not just in how experiments are produced but also in the value they deliver.

There’s a debate about whether to design experiments to win (achieve a positive commercial outcome) or to learn. But wise experimenters know a good program does both. If competitive advantage comes from innovation, then it stands to reason that companies that learn faster will outperform those that lag behind.

As experimentation grows, new challenges emerge. These include unwieldy research repositories full of insights that are never used, ballooning QA workloads, recurring bugs, and the difficulty of building the right team to deliver experiments. Digital marketing is now facing the same scaling challenges that engineering has dealt with for years.

Engineering a solution

Enter Taiichi Ohno, the Toyota engineer widely considered the father of the Toyota Production System, which built processes and practices for meeting the challenges of production and engineering at scale.

The wastes he identified underpin the D.O.W.N.T.I.M.E. model, a mnemonic for the eight wastes to eliminate, each letter standing for:

  • Defects
  • Overproduction
  • Waiting
  • Non-utilised talent
  • Transportation
  • Inventory
  • Motion
  • Excessive processing

In this article, we will explore the model and how it can be adapted to address the challenges of scaling experimentation. Because Inventory is tied closely to engineering and manufacturing, we will treat inventory and overproduction as a single concept for Digital Experience and experimentation.

Defects

Defects cost time and money and erode customer and stakeholder satisfaction, and they can occur in numerous ways. As we scale, generate more ideas and push our experimentation programs in new directions, the number of defects will start to increase.

Examples of defects

  • Poor quality control at the production level
  • Lack of understanding of the experimentation tooling
  • Lack of proper documentation
  • Lack of process standards
  • Not understanding your customers’ needs
  • Lack of knowledge of the test environment 
  • Lack of understanding of core statistical concepts 

Actions to take

  • Have a dedicated QA role within the team and ensure that proper checklists are completed by both the experiment producer and the QA reviewer.
  • Deliver adequate training through both learning tools and hands-on sessions to ensure that people know how to set up and run experiments effectively.
  • Map out the experimentation process end-to-end. Produce documentation on how to execute key processes. 
  • Create checklists for the development and QA teams. 
  • Run regular stakeholder workshops and gather customer data to understand pain points and opportunities for improvement. 
  • Establish basic training in the statistical concepts marketers are likely to encounter, and build this into the team (see the sketch after this list).
  • Establish an experimentation centre of excellence and a knowledge base to help teams across the marketing department and business run practical experiments. 
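
To ground the statistics training point above, here is a minimal sketch of one concept marketers meet constantly when reading experiment results: a two-proportion z-test comparing conversion rates between a control and a variant. The Python code, the conversion figures and the use of scipy are illustrative assumptions rather than a prescription for your tooling.

```python
# A minimal, illustrative two-proportion z-test comparing conversion rates
# between a control and a variant. The figures are made up for the example.
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return the z-statistic and two-sided p-value for a difference in rates."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / standard_error
    p_value = 2 * norm.sf(abs(z))  # two-sided test
    return z, p_value

z, p = two_proportion_z_test(480, 10_000, 540, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 is "significant" at the usual threshold
```

The point of the training is not the formula itself but the intuition behind it: the smaller the difference and the noisier the data, the more visitors are needed before a "winner" means anything.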

Overproduction

This can be seen in research repositories filled to the brim with insights that will never see the light of day, in experiment ideas designed before they are signed off, in too many solutions produced for a single problem, and in multiple variations generated for an experiment when staggering them would reach a statistically significant outcome more quickly. It also shows up as running more experiments than the website's traffic can handle. Overproduction wastes marketers' valuable time and increases complexity.

Examples of overproduction 

  • Unreliable process
  • Unstable production schedules
  • Inaccurate forecast and website traffic information 
  • Customer needs are not clear
  • Poor automation
  • Long or delayed set-up times
  • Overly large or complex experiments, which can be valuable in moderation but consume significant resources and yield sparse learnings when they fail

Actions to take 

  • Analyse the current backlog and areas of overproduction, and then fix the supporting processes.
  • Create checkpoints in the production process, which means that specific steps only happen once others have been completed. 
  • Conduct A/A tests to understand traffic behaviours and reporting. 
  • Understand the bandwidth of your website and conduct a sample size analysis to understand how many experiments the site or test environment can support (see the sketch after this list). 
  • Create feedback loops and checklists, and utilise elements such as brand guidelines and atomic design to establish replicable UI patterns. 
  • Review automations and test different scenarios. Create automation and process flows to make sure that dependencies are clearly identified. 
  • Create a list of dependencies that could lead to delays or complex setups. Design workarounds and fallbacks in layers of sign-off. 
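
As a companion to the bandwidth and sample size point above, the sketch below shows one rough way to estimate how many visitors a single test needs and, from that, how many experiments a site's traffic can support. The baseline rate, expected lift, power settings and monthly traffic figure are hypothetical placeholders; most experimentation platforms ship their own calculators that do the same job.

```python
# A rough sample size estimate for a simple two-variant conversion rate test,
# then a capacity check against monthly traffic. All figures are placeholders.
from math import ceil
from scipy.stats import norm

def sample_size_per_variant(baseline_rate, relative_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a relative lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

n = sample_size_per_variant(baseline_rate=0.05, relative_lift=0.10)
monthly_traffic = 400_000  # hypothetical eligible visitors per month
print(f"{n:,} visitors needed per variant")
print(f"~{monthly_traffic // (n * 2)} two-variant tests supportable per month")
```

The capacity figure is a ceiling, not a plan: it assumes tests run on non-overlapping traffic and complete within the month.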

Waiting

Waiting waste can include people, materials and software: anything that incurs a cost or retainer regardless of usage. Underutilisation increases costs and encourages bursts of activity under pressure, which can result in defects when corners are cut. It can also increase the pressure to obtain results, raising the likelihood of bias in analysis.

Examples of waiting 

  • Unplanned downtime or idle equipment
  • Long or delayed set-up times
  • Poor process communication
  • Lack of process control
  • Producing according to a forecast
  • Holding finished experiments that are waiting to go live

Actions to take

  • Consider flexible resources, like freelancers.
  • Consider multifunctional tooling and training teams across the business about the benefits of increasing adoption. 
  • Build a bench of multi-skilled experimentation practitioners who can switch between tasks as needed; for example, designers with front-end development skills. 
  • When unplanned downtime occurs, have internal improvement workstreams that focus on processes, internal story, or marketing.  
  • Set clear timelines and collaborate with teams across the test environment to prevent experiments from becoming stalled. 

Non-Utilised Talent

This occurs when employee talent is not used effectively, often because of organisational structure or culture. Talent can fail to reach its full potential due to a lack of training, opportunities, or effective leadership, or because people are over-stretched or assigned tasks that don't match their skills.

Examples of Non-Utilised Talent

  • Poor communication
  • Failure to involve people in workplace design and development
  • Lack of or inappropriate policies
  • Incomplete measures
  • Poor management
  • Lack of team training

Actions to take

  • Create skills maps and compendiums to track which team members are proficient in certain areas (see the sketch after this list). 
  • Build redundancy into the team so that if a team member is unavailable, production doesn’t grind to a halt. 
  • Hold regular workshops with stakeholders and team members to involve them in the system design process.
  • Regularly review processes and operational procedures to ensure they are fit for purpose. 
  • Set SMART objectives for team members to help them grow and advance experimentation within the company.
  • Ensure that team members receive basic training in essential skills for experimentation, such as statistics, UX principles, and project management.
  • Ensure 360 reviews for team members, including management. 
  • Provide management with KPIs and values that encompass performance, progression and team satisfaction. 
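
One lightweight way to act on the skills-mapping and redundancy points above is to keep a simple skills matrix and flag any capability that only one person can cover. The sketch below is hypothetical; the names, skills and structure are placeholders to illustrate the idea.

```python
# A hypothetical skills matrix: flag capabilities only one team member covers,
# i.e. single points of failure for the experimentation production line.
from collections import defaultdict

team_skills = {
    "Asha":  {"experiment design", "statistics", "stakeholder management"},
    "Bilal": {"front-end development", "QA", "experiment design"},
    "Cara":  {"UX research", "statistics", "copywriting"},
}

coverage = defaultdict(list)
for person, skills in team_skills.items():
    for skill in skills:
        coverage[skill].append(person)

for skill, people in sorted(coverage.items()):
    if len(people) == 1:
        print(f"Only {people[0]} covers: {skill}")  # candidate for cross-training
```

Anything flagged becomes a candidate for cross-training or hiring before it blocks production.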

Transportation

This is less of a consideration in the digital age, but still worth considering for larger experimentation teams. The costs associated with physical office space and the location of teams can affect the productivity of an experimentation team; differences in time zone and the distance to the client's offices can all be factors.

Examples of transportation waste

  • Poor time management
  • Poor design of production systems
  • Mismatch of cultural fit

Actions to take

  • Set clear schedules and timetables that take account of time zones and complexity. 
  • Map dependencies across different geographies and build systems that allow work queues to form, so that as team members log in, their next actions are clearly defined and ready to pick up. 
  • When hiring team members and scoping work, understand the organisation’s needs and customers’ expectations. Hire staff that fit the overall culture. 

Motion

For digital, this is unnecessary back-and-forth between stakeholders, team members, and customers. The more work goes back and forth, the more time, money, and resources are spent on moving experiments around without gaining any benefits, thereby stifling innovation. There are, of course, some traditional motion considerations, such as arming the team with the right tech stack and equipment to make their jobs easier and reduce analysis time, for example.  

Examples of Motion waste

  • Poor production planning
  • Poor process design
  • Shared equipment and machines
  • Siloed operations
  • Lack of production standards

Actions to take

  • Manage the research and experimentation stack carefully to prevent duplication and redundancy in tooling. 
  • Identify areas where waste loops can be created in processes and implement checkpoints and countermeasures to mitigate them. 
  • Ensure that you have the necessary tools and training to operate technology effectively and efficiently.
  • Build bridges between customers and delivery teams at the appropriate stakeholder and accountability levels, so that communication is clear and feedback is actioned. 
  • Utilise checklists and quality assurance effectively to remove defects. Build lists of known development issues and client preferences to reduce and catch defects. 

Excessive processing

This is caused by poorly designed processes and is often related to management and administrative issues. Think of experiments that go back and forth between stakeholders trying to get sign-off, or between the client and the agency. Going around in circles or creating roadblocks in the process will stifle innovation and kill good ideas before they ever reach the live environment.

Examples of excessive processing

  • Poor communication
  • Not understanding your customers’ needs
  • Human error
  • Slow approval process or excessive reporting
  • Lack of confidence in stakeholders or customers

Actions to take 

  • Establish a common language around experimentation terms such as “winner” and “loser”; these labels can create challenges depending on culture and encourage risk-averse or blinkered patterns of behaviour. 
  • Establish a straightforward process from the start with stakeholders and agree on how long sign-off should take and the number of rounds of amendments that should be allowed.
  • Train and educate team members to minimise human error. Put a QA step in place to help catch mistakes in areas where they are most likely to occur, such as web development. 
  • Ensure that reports are concise and to the point. State the hypothesis, the key metrics examined, the analysis, the supporting evidence, and the outcome / next steps in as clear and succinct a way as possible. 
  • Building trust with stakeholders and customers requires transparency and accountability. People want to understand what will happen and why.  

As experimentation programs mature and become more advanced, waste can creep in in numerous ways, hampering outputs and reducing innovation and financial impact. Managing waste using frameworks such as D.O.W.N.T.I.M.E. is a great way to enhance experimentation processes. 

Key Takeaways

  • Map the experimentation process and work on refinement based on where backlogs and defects are occurring.
  • Workshop to understand internal stakeholder and customer needs to build processes that work across the organisation and testing environments.
  • Establish basic training in elements like UX and statistical analysis that are key skills in the experimentation process.
  • Create clear documentation that explains the experimentation process so that anyone across the organisation can understand it. 
  • Use checklists to make sure that key quality assurance processes are completed at each stage. Tailor these over time based on the experimentation workflow.
  • Prioritise experimentation programs based on where they will drive the most commercial impact and strategic learnings.

As a follow-up to this blog, I recommend taking a look at the M.I.N.D.S.P.A.C.E framework if you’d like to learn more about psychology and user behaviour in marketing. Check out our webinar or white paper, and if you have any questions, please reach out!