Richard Lavaile sits down with Marco Gaietti, a veteran of business and operations strategy, to unpack how a renewed long-term logistics partnership underpins rapid growth in UK online grocery. Drawing on hands-on work in optimization and customer-centric operations, Marco explains how a single national consolidation center, a reconfigured Kettering facility, and a redesigned warehouse management system (WMS) knit together into a resilient, scalable network. Across our conversation, he reflects on translating demand signals into capacity, reshaping slotting and replenishment, balancing cost-to-serve with availability, and leading teams through change while safeguarding service levels. The themes are practical and forward-looking: fewer handoffs, tighter governance, smarter rules, and a shared commitment to keep seven customer fulfillment centers (CFCs) fed with ambient products, day in and day out.
What business goals drove the decision to renew the long-term logistics partnership, and how did previous results inform that choice? Can you share specific KPIs that tipped the balance, and a moment when the partnership proved mission-critical?
The renewal centered on growth, stability, and scalability. Previous results showed the model could absorb year-on-year volume increases without cracking service. We watched inventory accuracy, on-time in-full (OTIF) delivery to the seven CFCs, and dock-to-stock speed as our north stars. A mission-critical moment came during a sudden peak when the national consolidation center buffered demand surges, kept ambient flow steady, and shielded CFCs from upstream variability.
A national consolidation center now handles bulk ambient storage and feeds seven customer fulfillment centers. How does this setup reduce variability and costs, and what trade-offs do you manage around transport frequency, inventory positioning, and service levels?
One center smooths inbound noise and converts it into consistent outbound rhythm. It cuts duplication, trailer touches, and administrative overhead across seven sites. The trade-offs sit between bigger linehaul drops and local agility. We tune frequency by lane, accept slightly deeper central stock to protect service, and keep CFCs lean to lower handling cost.
Volumes have risen year over year. What demand signals most accurately forecast that growth, and how did you translate them into capacity, labor, and transport plans? Which forecasting pitfalls did you avoid?
We weighted order intake trends, SKU onboarding cadence, and promotion calendars over vanity metrics. Those signals translated into incremental racking at Kettering, staggered labor ramps, and pre-booked transport windows. We also sized pick-face changes to match the ambient mix. We avoided the pitfall of averaging peaks away and the trap of chasing last week’s noise.
The Kettering facility was reconfigured for new pallet configurations and SKU profiles. Walk us through the redesign decisions, the sequencing of changes, and how you measured success. What surprised you during implementation?
We began with pallet geometry and load stability, then mapped SKU velocity to storage media. Next came pick-face design, put-away rules, and replenishment logic in the WMS. We sequenced changes by zone to protect outbound. The surprise was how quickly new SKU profiles shifted once the seven CFCs stabilized, forcing faster slotting refreshes than expected.
You increased storage capacity while maintaining service continuity. Which risk controls and contingency plans made that possible? If you had to do it again faster, what would you change?
We ring-fenced live pick zones and piloted changes in shadow locations. Dual-running old and new WMS rules allowed rollback if exceptions spiked. Daily control-tower huddles caught issues before they touched outbound. Next time, I’d pre-stage more modular racking and pre-train super-users earlier to compress the curve.
A redesigned warehouse management system was central to the optimization. What core rules, data models, and integration points delivered the biggest gains? How did you validate them in live operations without disrupting outbound?
The wins came from rule-driven slotting, velocity-based replenishment, and tighter put-away constraints. We enriched item masters with cube, handling notes, and compatibility flags. Integration with transport planning and CFC intake schedules locked timing to capacity. We validated via A/B waves after cutoff, then widened scope once error rates and cycle times held steady.
New pick faces were introduced. How did you determine slotting strategy, pick-path design, and replenishment triggers? What were the before-and-after metrics on pick rates, travel time, and error reduction?
Slotting followed velocity, unit of measure, and touch frequency, with safety and ergonomics layered in. We simplified pick paths to reduce cross-aisle backtracking and aligned faces to carton flow. Replenishment triggers reflected demand volatility and case-pack size. Pick rates rose, travel time fell, and errors dropped, while service to all seven CFCs remained intact.
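The slotting priorities Marco describes can be sketched as a simple rule cascade. This is an illustrative sketch only: the zone names, pick-count thresholds, and `Sku` fields are assumptions for the example, not the actual Kettering rules.

```python
from dataclasses import dataclass

@dataclass
class Sku:
    code: str
    weekly_picks: int      # velocity: pick lines per week (assumed metric)
    unit_of_measure: str   # e.g. "case" or "each"
    heavy: bool            # ergonomics gate: heavy items slot low

def slot_zone(sku: Sku) -> str:
    """Assign a pick-face zone from velocity and handling constraints.
    Safety/ergonomics gates come first, then velocity bands; all
    thresholds here are illustrative assumptions."""
    if sku.heavy:
        return "ground-level"      # ergonomics layered in before velocity
    if sku.weekly_picks >= 200:
        return "golden-zone"       # fastest movers aligned to carton flow
    if sku.weekly_picks >= 50:
        return "mid-aisle"
    return "reserve-adjacent"      # slow movers off the main pick path

fast = Sku("AMB-001", weekly_picks=350, unit_of_measure="case", heavy=False)
slow = Sku("AMB-002", weekly_picks=12, unit_of_measure="each", heavy=False)
print(slot_zone(fast))  # golden-zone
print(slot_zone(slow))  # reserve-adjacent
```

Ordering the rules this way means safety constraints can never be overridden by a velocity score, which mirrors the "safety and ergonomics layered in" point above.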
Replenishment processes were enhanced. Which algorithms or thresholds govern replenishment now, and how do they respond to promotions and seasonality? Describe the exception handling for stockouts and overstocks.
We use floor-based reorder points tuned by velocity band and case multiple. Promotion flags lower thresholds and widen safety bands temporarily. Seasonality shifts the banding and pushes earlier pre-build at the national center. Exceptions trigger targeted cross-dock, escalation to buyers, and guided transfers to neutralize overstocks.
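A minimal sketch of that trigger logic, rounding to case multiples and widening the safety band when a promotion flag is set. The 1.5x promo multiplier and the parameter values are illustrative assumptions, not the operation's tuned thresholds.

```python
import math

def reorder_point(avg_daily_demand: float, lead_time_days: float,
                  safety_days: float, case_pack: int,
                  promo: bool = False) -> int:
    """Reorder point rounded up to a full case multiple.
    The promo adjustment (wider safety band) is an assumed factor."""
    if promo:
        safety_days *= 1.5  # promotion flag temporarily widens the band
    raw = avg_daily_demand * (lead_time_days + safety_days)
    return math.ceil(raw / case_pack) * case_pack  # honor case-pack size

print(reorder_point(40, 2, 1, case_pack=12))              # 120
print(reorder_point(40, 2, 1, case_pack=12, promo=True))  # 144
```

Rounding up to the case pack keeps replenishment moves in whole handling units, which is what ties the trigger back to "case multiple" in the answer above.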
Put-away logic was updated. What attributes—velocity, cube, compatibility, or handling constraints—most strongly influence location assignment? How did you train teams to trust and fine-tune those rules?
Velocity and cube lead, with compatibility and handling constraints as gates. The WMS assigns best-fit locations near appropriate pick faces or deep reserve. We trained with side-by-side tasks showing why the system chose a slot. Then we captured operator feedback as rule overrides, feeding continuous improvement.
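The "gates first, then best fit" idea can be sketched like this: compatibility and cube act as hard filters, velocity steers the zone preference, and the tightest cube fit wins. Location fields, zone names, and the tie-break are assumptions for illustration, not the live WMS rules.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Location:
    loc_id: str
    cube: float              # usable volume of the slot
    zone: str                # "pick-face" or "reserve"
    allowed_classes: set     # compatibility gate, e.g. {"ambient"}

def best_fit(item_cube: float, item_class: str, fast_mover: bool,
             locations: list) -> Optional[str]:
    """Hard gates (compatibility, cube) filter candidates; velocity then
    prefers pick faces for fast movers, tightest cube fit breaks ties."""
    candidates = [l for l in locations
                  if item_class in l.allowed_classes and l.cube >= item_cube]
    if not candidates:
        return None  # exception path: no compliant slot, escalate
    preferred = "pick-face" if fast_mover else "reserve"
    candidates.sort(key=lambda l: (l.zone != preferred, l.cube - item_cube))
    return candidates[0].loc_id

locs = [Location("P-01", 1.0, "pick-face", {"ambient"}),
        Location("R-07", 2.0, "reserve", {"ambient"})]
print(best_fit(0.8, "ambient", True, locs))   # P-01: fast mover, pick face
print(best_fit(0.8, "ambient", False, locs))  # R-07: slow mover, deep reserve
```

Exposing the decision as a ranked sort is also what makes the "show why the system chose a slot" training approach feasible: the candidate list and its ordering can be displayed to operators directly.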
Outbound efficiency improved without sacrificing service levels. Which levers mattered most—wave planning, batching, load sequencing, or dock scheduling? Can you share a concrete metric shift and the operational story behind it?
Wave planning and dock scheduling did the heavy lifting, with batching tightening pick density. Load sequencing synced to CFC intake windows and route cutoffs. The operational story is fewer idle docks and smoother carrier turns. That translated into faster outbound while still meeting service commitments to seven CFCs.
Greater stock cover now supports peak trading periods year-round. What’s the sweet spot for days of cover by category, and how do you prevent obsolete inventory? How do you align suppliers to that strategy?
We tailor cover by velocity and seasonality rather than chase a single number. For ambient, deeper central buffers protect peak periods without clogging CFCs. Obsolescence risk is curbed by disciplined review of slow-movers and promotion tie-ins. Suppliers align through shared forecasts and delivery slots anchored to the consolidation rhythm.
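Tailored cover can be expressed as a small lookup keyed by velocity band and season rather than one global number. The bands, seasons, and day counts below are illustrative assumptions, not the partnership's actual targets.

```python
# Illustrative days-of-cover targets; values are assumptions for the example.
COVER_DAYS = {
    ("fast", "standard"): 5,
    ("fast", "peak"): 9,
    ("medium", "standard"): 10,
    ("medium", "peak"): 15,
    ("slow", "standard"): 18,  # slow movers capped to curb obsolescence
    ("slow", "peak"): 21,
}

def target_stock(avg_daily_demand: float, band: str, season: str) -> float:
    """Central buffer target in units: cover tailored by band and season."""
    return avg_daily_demand * COVER_DAYS[(band, season)]

print(target_stock(100, "fast", "standard"))  # 500.0
print(target_stock(100, "fast", "peak"))      # 900.0: deeper pre-peak buffer
```

Keeping the table explicit makes the slow-mover cap visible and reviewable, which supports the disciplined obsolescence reviews mentioned above.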
Acting as a consolidation center aims to stabilize product flow into CFCs. What governance and data-sharing practices keep the network synchronized daily, and how do you escalate when bottlenecks emerge?
A daily control-tower cadence aligns inbound, storage, and outbound across all seven CFC lanes. We share inventory visibility, ASN quality, and exception dashboards. Bottlenecks trigger predefined playbooks and capacity swaps. If needed, we re-prioritize loads and flex labor across zones to clear the path.
Scaling infrastructure is a stated priority. Where do you see diminishing returns—labor, space, automation, or transport—and how are you planning to extend the curve? What investments are next?
Diminishing returns first show up in space and intra-aisle travel. Labor scales, but only with smarter orchestration. We extend the curve with better data, modular storage, and rules that reduce touches. Next comes deeper WMS rule refinement and selective equipment upgrades in Kettering to support ambient growth.
From a leadership standpoint, how did you maintain team morale and accuracy during major system and layout changes? Which training, incentives, or communication routines proved most effective?
We kept changes transparent and predictable, explaining the why and the when. Hands-on training at the pick face built confidence. Short, daily huddles captured pain points and quick wins. Recognition tied to accuracy and safety reminded the team that service continuity matters as much as speed.
When measuring overall value, how do you balance cost-to-serve, availability, and customer experience? What dashboard or cadence do you rely on, and which metric is the early warning signal you never ignore?
We put availability and service steadiness alongside cost, not behind it. The dashboard blends inventory health, OTIF to seven CFCs, pick accuracy, and carrier turn times. A weekly rhythm frames strategy; a daily one catches drift. The canary in the coal mine is fill risk at the consolidation center, because that echoes across the whole network.
What is your forecast for UK online grocery logistics over the next five years?
Expect consolidation centers to become the norm for ambient flows, feeding multiple CFCs with fewer breaks in the chain. WMS rule sophistication will outpace heavy automation in many sites, unlocking gains with data rather than concrete. Partnerships will deepen, with governance and shared visibility as the glue. For readers: build the muscle to re-slot fast, protect service relentlessly, and let a single, well-run consolidation hub do the heavy lifting.
