Ocado Transforms Logistics With Advanced Digital Twins

With nearly 25 years of experience at the forefront of industrial transformation, Andy Ingram-Tedd has witnessed the evolution of warehouse automation from simple mechanical assistance to the sophisticated, software-driven ecosystems of today. As the VP of Advanced Technology at Ocado Intelligent Automation (OIA), he specializes in the high-stakes interplay between robotics, human labor, and the digital frameworks that govern them. His work focuses on shifting the industry narrative away from the mere substitution of people with machines toward a comprehensive systems-design approach that leverages data to remove guesswork from global logistics.

This conversation explores the critical distinctions between predictive modeling and live digital twins, the necessity of simulating “worst-day” scenarios to ensure operational resilience, and the technical hurdles of modeling complex bot-based storage systems. We also delve into the nuances of ergonomic station design and the expansion of these advanced automation platforms into high-compliance sectors like pharmaceutical distribution.

Simulation is often confused with digital twins. How do you distinguish between a predictive model used before construction and a live system aligned with operational data, and what specific decision-support benefits does this real-time alignment offer to warehouse managers during daily operations?

The distinction is fundamental to how we de-risk a project versus how we optimize an active site. Simulation is essentially the “pre-physical” phase where we load assumptions—orders, stock layouts, and rules—to see if a design will actually work before a single piece of steel is cut. A digital twin, however, only truly exists once the warehouse is built because it requires a continuous heartbeat of real operational data to stay aligned with reality. For a manager, this real-time alignment is a powerful decision-support tool because it moves beyond instinct; you can test a configuration change, like a new item placement strategy, in the digital space first. It allows us to understand exactly what will happen if we adjust outload timings or pick speeds today, ensuring that the physical system remains stable while we seek continuous improvement.

Spreadsheets often fail to capture the complexity of systems with high throughput and many moving parts. Why is discrete event simulation necessary for modeling specific start points and process rules, and how does this approach help identify weak points before any capital is committed?

When you are dealing with high-utilization environments where thousands of bots move simultaneously, a spreadsheet simply cannot account for the chaotic interplay of variables. We rely on discrete event simulation because it treats every activity as a unique event with a specific start point, end point, and set of governing rules that can’t be averaged out. If you just look at mean times and motions, you miss the cascading bottlenecks that occur when hundreds of processes overlap. This high-fidelity modeling allows us to find the breaking points in a layout or a software rule set long before capital is committed. By the time we start construction, we have already run countless “what-if” scenarios to ensure the infrastructure can handle the intended load without unexpected failure.
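
To make the discrete-event idea concrete, here is a minimal sketch of the pattern described above: every activity is a timestamped event with its own rule for what it schedules next, and queue behaviour emerges rather than being averaged. The event names, rates, and single-station scope are illustrative assumptions, not OIA's actual tooling.

```python
import heapq
import random

# Minimal discrete-event loop: each activity is a timestamped event with a rule
# for what it schedules next. Rates and event names are illustrative only.
random.seed(42)

BOT_ARRIVAL_MEAN = 4.0   # assumed seconds between bot deliveries to one pick station
PICK_TIME_MEAN = 5.5     # assumed seconds for an operator to pick from a delivered bin

events = [(0.0, "bot_arrival")]   # (time, kind) min-heap ordered by time
queue_depth = 0                   # bins waiting at the station
max_queue = 0
station_free_at = 0.0
now = 0.0

while now < 3600.0:               # simulate one hour
    now, kind = heapq.heappop(events)
    if kind == "bot_arrival":
        queue_depth += 1
        max_queue = max(max_queue, queue_depth)
        # schedule the next bot and, if the operator is idle, the next pick
        heapq.heappush(events, (now + random.expovariate(1 / BOT_ARRIVAL_MEAN), "bot_arrival"))
        if station_free_at <= now:
            heapq.heappush(events, (now, "pick_start"))
    elif kind == "pick_start" and queue_depth > 0:
        queue_depth -= 1
        station_free_at = now + random.expovariate(1 / PICK_TIME_MEAN)
        heapq.heappush(events, (station_free_at, "pick_start"))

print(f"worst backlog at the station: {max_queue} bins")
```

Because the assumed bot arrivals outpace the assumed pick time, the backlog grows over the hour, which is exactly the kind of cascading bottleneck a spreadsheet of averages would hide.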

Optimizing for a “best-case scenario” can lead to system failure under pressure. What specific variables, such as equipment downtime or labor gaps, must be included to model a facility’s “worst day,” and how do these stress tests prevent catastrophic operational breakdowns?

We have been operating our own equipment for a quarter of a century, so we aren’t guessing about what can go wrong; we have lived through it. To model a “worst day,” we purposely inject stressors like late inbound vehicles, unexpected equipment downtime, or significant gaps in the labor force into the simulation. We often combine these factors to see how the system recovers from a “perfect storm” of operational friction. This stress testing is vital because it prevents catastrophic breakdowns by revealing how much “buffer” is actually required in the system. It ensures that when the real world inevitably throws a curveball, the facility has the inherent resilience to keep moving rather than grinding to a halt.
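
One way to picture the "perfect storm" composition is as stressors layered onto a baseline scenario before it is fed to the simulator. The field names and magnitudes below are assumptions for the sketch, not OIA's real parameters.

```python
from dataclasses import dataclass, replace

# Illustrative "worst day" composition: each stressor perturbs a baseline plan,
# and stressors can be stacked. Figures are assumptions, not real values.

@dataclass(frozen=True)
class Scenario:
    inbound_delay_min: float = 0.0   # how late the inbound trailers arrive
    bot_availability: float = 1.0    # fraction of the bot fleet in service
    staffing_level: float = 1.0      # fraction of planned labor on shift

def late_inbound(s: Scenario) -> Scenario:
    return replace(s, inbound_delay_min=s.inbound_delay_min + 90)

def bot_downtime(s: Scenario) -> Scenario:
    return replace(s, bot_availability=s.bot_availability * 0.92)

def labor_gap(s: Scenario) -> Scenario:
    return replace(s, staffing_level=s.staffing_level * 0.85)

baseline = Scenario()
worst_day = labor_gap(bot_downtime(late_inbound(baseline)))
print(worst_day)
# The composed scenario is then run through the simulation to measure how much
# buffer the design needs before service levels slip.
```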

Standard off-the-shelf simulation packages often lack the depth to model complex grid-based storage and bot navigation. What are the technical advantages of building simulation tools in-house, and how does using identical software for both simulation and production improve overall system reliability?

The primary reason we developed our own simulation capability back in 2008 is that no third-party package could accurately replicate the logic of our dense, grid-based storage where bots navigate in such close proximity. By building our own tools, we ensure that the software running the simulation is identical to the code that actually controls the bots on the production site. This creates a “truth” loop where the model behaves exactly like the physical asset will, eliminating the translation errors that occur with generic software. This tight integration significantly improves reliability because any behavior we observe in the digital environment is a direct reflection of how the production site will execute those same commands.
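
The "identical software" point can be illustrated with a simple software-in-the-loop pattern: control logic written once against an interface, with only the transport differing between the model and the live grid. The class and method names here are hypothetical, used only to show the shape of the idea.

```python
from typing import Protocol

# Sketch of the "one control codebase" idea: routing logic is written once
# against an interface, and only the transport differs between simulation and
# the live site. Names are hypothetical.

class BotTransport(Protocol):
    def send_move(self, bot_id: str, cell: tuple[int, int]) -> None: ...

class SimulatedTransport:
    def send_move(self, bot_id: str, cell: tuple[int, int]) -> None:
        print(f"[sim] {bot_id} -> {cell}")   # advance the model instead of hardware

class LiveTransport:
    def send_move(self, bot_id: str, cell: tuple[int, int]) -> None:
        raise NotImplementedError("would publish the command to the real grid")

def route_bot(transport: BotTransport, bot_id: str, path: list[tuple[int, int]]) -> None:
    # Identical routing rules run in both environments, so behaviour observed
    # in the digital model reflects what the production site would execute.
    for cell in path:
        transport.send_move(bot_id, cell)

route_bot(SimulatedTransport(), "bot-017", [(0, 0), (0, 1), (1, 1)])
```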

While some pick stations can reach peak speeds of over 1,000 units per hour, sustainable targets are often significantly lower. How do you determine the “sweet spot” for throughput to avoid wasting upstream resources, and what role does simulation play in designing ergonomic stations?

We have demonstrated stations hitting 1,072 units per hour, but building a whole system around that peak would be a strategic mistake and a waste of upstream capital. We use simulation to find the “sweet spot”—often between 600 and 700 units per hour—where the operator is consistently fed work without being overwhelmed, and the machinery isn’t over-specified. Simulation allows us to experiment with modular layout changes to see how they impact the operator’s physical movements and overall ergonomics. The goal is to design a station that supports a high, sustainable rhythm of work, ensuring we aren’t paying for human idle time or over-investing in bot capacity that a person couldn’t possibly keep up with.
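
A back-of-the-envelope view shows why sizing for the peak wastes upstream capital. The 1,072 peak and the 600-700 sustainable range come from the answer above; the units-per-bin figure is an illustrative assumption.

```python
# Rough view of the pick-station "sweet spot": size the bot feed for a
# sustainable operator rate rather than the demonstrated peak.
# All figures except the 1,072 peak and the 600-700 range are assumptions.

PEAK_UPH = 1072          # demonstrated peak units per hour at a single station
SUSTAINABLE_UPH = 650    # mid-point of the 600-700 sustainable range
UNITS_PER_BOT_VISIT = 3  # illustrative: items picked from each presented bin

def bot_visits_per_hour(target_uph: int) -> float:
    return target_uph / UNITS_PER_BOT_VISIT

for label, rate in (("peak-sized", PEAK_UPH), ("sweet-spot", SUSTAINABLE_UPH)):
    print(f"{label}: {bot_visits_per_hour(rate):.0f} bot presentations/hour per station")

# Sizing the grid for the peak demands roughly 65% more bot presentations per
# station than an operator can sustain: capital spent on idle capacity.
```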

Automation frequently struggles with “corner cases,” such as awkward product presentations or rare failure modes. How do you integrate these extreme, unusual situations into your modeling, and what steps are required to ensure a live site handles these exceptions safely without constant human intervention?

In a high-volume site, a “one-in-a-million” event might actually happen several times a day, so you cannot have a live site unless it can handle these corner cases autonomously. We integrate these extreme scenarios into our modeling by identifying unusual product orientations or rare mechanical failure modes and teaching the system how to react. Ensuring safety and reliability requires a sophisticated software layer that can detect these exceptions and trigger a programmed recovery sequence. Our objective is to minimize human intervention, so the modeling must prove that the system can either self-correct or safely isolate the issue without stopping the entire operation.
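
The arithmetic behind "one in a million, several times a day" is simple once you assume a plausible daily volume; the figure below is an assumption, not one quoted in the interview.

```python
# Why a "one-in-a-million" corner case is an everyday event at scale.
# The daily volume is an illustrative assumption.

DAILY_UNITS = 3_000_000          # hypothetical units handled per day across a site
EXCEPTION_RATE = 1 / 1_000_000   # a "one-in-a-million" corner case

expected_per_day = DAILY_UNITS * EXCEPTION_RATE
print(f"expected corner-case events per day: {expected_per_day:.0f}")   # -> 3
```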

Pharmaceutical distribution introduces strict requirements for batch traceability and security that differ from grocery retail. What technical adjustments are necessary when applying automation platforms to the pharma sector, and what specific productivity gains can be achieved in these high-compliance environments?

While the physical movement of goods in pharma is similar to grocery, the digital requirements for accountability, security, and lot traceability are much more stringent. We have adapted our platform to handle these high-compliance needs, ensuring every touchpoint is recorded for batch integrity. In projects like our Montreal facility with McKesson, we are seeing that the same automation principles used in grocery can drive massive productivity gains in pharma while actually improving accuracy. By automating the traceability aspect alongside the physical picking, we remove the risk of human error in documentation, which is just as critical as the speed of the pick itself.
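
As a rough illustration of what "every touchpoint is recorded" implies, a traceability trail can be thought of as one immutable record per handling event. The field names below are assumptions; real GxP-compliant schemas would be far richer.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative shape of a per-touchpoint traceability record for a pharma pick.
# Field names are assumptions, not a real schema.

@dataclass(frozen=True)
class TouchpointRecord:
    sku: str
    lot_number: str
    expiry: str
    station_id: str
    operator_or_robot: str
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

event = TouchpointRecord(
    sku="SKU-48213",
    lot_number="LOT-2024-0917",
    expiry="2026-03-31",
    station_id="pick-07",
    operator_or_robot="robot-arm-02",
)
print(event)
# Appending one immutable record per touchpoint builds the audit trail that
# batch recall and lot traceability depend on.
```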

What is your forecast for the future of live digital twins in global logistics?

I believe we are moving toward a future where the guesswork is completely removed from logistics through the maturation of live digital twins that offer end-to-end visibility. We will see systems that don’t just model bot movements, but integrate conveyors, pallets, vehicles, and people into a single, cohesive ecosystem of data. This level of total integration will allow global operators to achieve a state of continuous optimization, where the system learns from every single shift and adapts its own rules in real time. Ultimately, the differentiator in the market won’t be a specific robot, but the ability to accurately model and manage these incredibly complex interactions across an entire global network.
