Richard Lavaile sat down with Marco Gaietti, a management consulting veteran whose career spans strategic management, operations, and customer relations. Marco is known for helping engineering and business leaders meet in the middle—translating agile promises into measurable outcomes. In this conversation, he unpacks how flexible tooling, clear metrics, and smart integrations keep teams shipping what matters without creating bureaucracy.
Across our discussion, Marco emphasizes three themes: first, the need for workflow flexibility that supports Scrum, Kanban, and hybrids without forcing teams into uniformity; second, the importance of connecting engineering execution to business goals through roadmaps, dashboards, and portfolio views that offer total visibility without micromanagement; and third, a pragmatic approach to AI, automation, and integrations—using them to predict risks, prevent overcommitment, and reduce busywork while keeping the resulting noise in check. He compares monday dev with tools like Jira, ClickUp, Asana, Smartsheet, Wrike, Notion, Trello, Aha!, and Zoho Sprints, and shares migration and rollout playbooks that reduce disruption and speed adoption.
You argue rigid tools slow dev teams, while monday dev lets squads design Scrum, Kanban, or hybrids. Can you share a real example where that flexibility cut a backlog, the specific boards or automations you changed, and the before/after cycle times or release frequency?
I’ve watched teams wrestle with prescriptive workflows that looked neat in a slide deck but collapsed under real-world edge cases. One group shifted to monday dev specifically so each squad could pick Scrum, Kanban, or a hybrid—and that choice alone removed a lot of hidden friction. We built customizable sprint boards for the Scrum squads and a drag‑and‑drop Kanban with WIP columns for the ops-heavy squad; then we layered in pre‑built automation recipes for bug triage and release notifications, plus no‑code rules to move items when statuses changed. The notable result wasn’t just a thinner backlog—it was the renewed momentum: leadership got real-time visibility through dashboards and burndowns without requiring extra status meetings, so the teams shipped more confidently and more often. When your platform bends to the team’s rhythm, you stop paying the “translation tax” every single day.
monday dev ties engineering work to business goals via dashboards and roadmaps. Walk me through a step-by-step setup you’ve used—boards, fields, and views—plus the exact metrics leaders watched weekly and one anecdote where this alignment changed a roadmap decision.
I start with outcome-first modeling: a high-level roadmap board for company goals and epics, linked to team-level sprint or Kanban boards. On the roadmap: goal fields, epic owners, target windows, and a simple value/effort lens; on team boards: status, priority, estimates, dependencies, and integration links (e.g., GitHub) for traceability. Then I connect dashboards to show real‑time burndown charts, cumulative flow, and portfolio health; leaders review these weekly along with velocity, WIP distribution, and upcoming release windows from the same hub. In one review, a dashboard revealed a cross‑team dependency that would delay a highly visible initiative—because everyone could trace epics to goals in the same view, we re-sequenced the roadmap in minutes instead of weeks, preserving confidence and avoiding rework.
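To make the structure concrete, here is a minimal sketch of how I model that linkage before building it in the UI; the field names and the progress rollup are illustrative assumptions, not monday dev's actual schema.

```python
# Illustrative model of the roadmap-to-team linkage described above.
# Field names are hypothetical; in monday dev this is configured as boards and columns.
from dataclasses import dataclass, field


@dataclass
class Epic:
    name: str
    goal: str            # company goal this epic rolls up to
    owner: str
    target_window: str   # e.g. "Q3"
    value: int           # simple value score, 1-5
    effort: int          # simple effort score, 1-5


@dataclass
class WorkItem:
    title: str
    epic: Epic                      # link back to the roadmap board
    status: str                     # "Backlog" / "In Progress" / "Review" / "Done"
    priority: str
    estimate: int                   # story points
    dependencies: list[str] = field(default_factory=list)
    github_pr: str | None = None    # traceability link


def epic_progress(items: list[WorkItem]) -> dict[str, float]:
    """Share of estimated points completed per epic; the kind of rollup a dashboard shows."""
    done: dict[str, int] = {}
    total: dict[str, int] = {}
    for item in items:
        total[item.epic.name] = total.get(item.epic.name, 0) + item.estimate
        if item.status == "Done":
            done[item.epic.name] = done.get(item.epic.name, 0) + item.estimate
    return {name: done.get(name, 0) / pts for name, pts in total.items() if pts}
```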
You mention AI-driven sprint planning that predicts realistic commitments. How do you calibrate it to team velocity, what data points feed the suggestions, and can you share a time it prevented overcommitment—include sprint goals, forecast vs. actual points, and the follow-up adjustments?
Calibration starts with history: closed items, statuses, and throughput patterns feed the AI to generate grounded forecasts. The model uses the team’s previous cycle patterns, story types, and board-specific lead times to suggest what’s achievable; we then add context—holidays, on‑call duties—to refine it. In practice, the AI’s “smart sprint planning” nudged the squad to trim scope when a risk spike appeared in the dashboard; the sprint goal stayed focused on one epic and related bug cluster, and the team exited the sprint on target rather than dragging work forward. Post-sprint, we captured learnings in the retrospective tool right on the board, and the AI improved future suggestions—steady, realistic commitments beat headline capacity illusions every time.
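To show the shape of that calibration, here is a minimal sketch of the arithmetic involved; the weighting, buffer, and numbers are my own illustrative assumptions, not monday dev's model.

```python
# Minimal sketch of a velocity-based commitment forecast.
# The recency weighting and capacity haircut are illustrative assumptions.
def forecast_commitment(past_velocities: list[float],
                        capacity_factor: float = 1.0,
                        buffer: float = 0.9) -> float:
    """Suggest a realistic points commitment for the next sprint.

    past_velocities: completed points from recent sprints, oldest first.
    capacity_factor: fraction of normal capacity (holidays, on-call), e.g. 0.8.
    buffer: haircut that favours steady, realistic commitments over headline capacity.
    """
    if not past_velocities:
        return 0.0
    # Weight recent sprints more heavily than older ones.
    weights = list(range(1, len(past_velocities) + 1))
    weighted_avg = sum(v * w for v, w in zip(past_velocities, weights)) / sum(weights)
    return round(weighted_avg * capacity_factor * buffer, 1)


# Example: six past sprints, one engineer on holiday next sprint.
print(forecast_commitment([34, 40, 31, 38, 36, 39], capacity_factor=0.8))
```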
The platform flags risks and bottlenecks early. What signals do you monitor (e.g., WIP trends, lead time, blocked items), which alerts or automations do you turn on first, and how did a team resolve a flagged risk—timeline, owners, and measurable impact?
I track WIP trends, blocked-item age, lead time shifts, and queue buildups in testing or review. First, I enable automations for status changes, blocked-item alerts, and release notifications, and I add pre‑built risk identification so early warnings roll up to dashboards. A team once saw an uptick in blocked items at the review stage; the alert triggered an ownership huddle, and we introduced code‑review nudges and clearer swimlanes. Within the same release window, the queue stabilized and leadership could see the improvement on the burndown and cumulative flow—no extra meetings, just clean signals and crisp ownership.
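A minimal sketch of those signals, with thresholds as illustrative assumptions rather than platform defaults:

```python
# Illustrative risk signals over a simple item list; the thresholds are assumptions.
from datetime import date


def flag_risks(items: list[dict], today: date,
               max_blocked_days: int = 3, wip_limit: int = 5) -> list[str]:
    """Return human-readable warnings for aging blocked items and review-queue buildups."""
    warnings = []
    in_review = 0
    for item in items:
        if item["status"] == "Blocked":
            age = (today - item["blocked_since"]).days
            if age >= max_blocked_days:
                warnings.append(f"'{item['title']}' blocked for {age} days")
        if item["status"] == "Review":
            in_review += 1
    if in_review > wip_limit:
        warnings.append(f"Review queue at {in_review} items (limit {wip_limit})")
    return warnings


items = [
    {"title": "Payment retries", "status": "Blocked", "blocked_since": date(2024, 5, 2)},
    {"title": "Search facets", "status": "Review", "blocked_since": None},
]
print(flag_risks(items, today=date(2024, 5, 8)))
```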
You cite up to 250K monthly automation actions on Enterprise. What are the most impactful recipes you’ve deployed at scale (e.g., bug triage, code review nudges), how do you govern them to avoid noise, and what throughput or handoff gains did you track?
At Enterprise scale, the heavy hitters are bug triage, code review reminders, handoff notifications, and cross‑board sync for epics and sub‑items. The 250K automation actions capacity lets large orgs orchestrate complex workflows without writing code, while Pro teams get 25K actions for robust squad-level flows. Governance matters: we centralize recipe templates, set naming conventions, and require a quick pilot per recipe before org‑wide rollout to prevent alert fatigue. Gains show up in fewer handoff misses and cleaner queues; people see the effect in dashboards and burndowns—work moves without nagging, and throughput climbs because the system does the nudging.
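A sketch of the governance gate I describe, with the naming convention, pilot flag, and noise threshold as illustrative assumptions rather than monday dev features:

```python
# Illustrative automation-recipe registry used for governance; not a monday dev feature.
import re

# Convention: <scope>--<type>, e.g. "bug-intake--triage".
NAME_PATTERN = re.compile(r"^[a-z]+(-[a-z]+)*--(triage|nudge|handoff|sync)$")


def can_roll_out(recipe: dict) -> tuple[bool, str]:
    """A recipe goes org-wide only if it follows the naming convention
    and survived a one-team pilot without generating alert fatigue."""
    if not NAME_PATTERN.match(recipe["name"]):
        return False, "name does not follow <scope>--<type> convention"
    if not recipe.get("piloted"):
        return False, "needs a pilot on one team first"
    if recipe.get("alerts_per_week", 0) > 50:
        return False, "too noisy; tighten the trigger conditions"
    return True, "approved for org-wide rollout"


print(can_roll_out({"name": "bug-intake--triage", "piloted": True, "alerts_per_week": 12}))
```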
Pricing spans Free (up to 2 seats/3 boards) through Pro and Enterprise. How do you decide when to upgrade—what limits you hit first, which features justify the jump, and can you share ROI math comparing subscription cost to saved meeting hours or faster cycle times?
Teams start on Free—up to 2 seats and 3 boards—to prove out their rhythm. As soon as cross-team visibility, integrations, or heavier automations become essential, Standard or Pro makes sense; Pro adds advanced features and up to 25K automation actions, which is a meaningful step-up. Enterprise is where portfolio management, security, and 250K automation actions matter for complex orgs. ROI is straightforward: if a plan replaces recurring status meetings with real‑time dashboards and eliminates manual handoffs, the subscription is absorbed by reclaimed focus time—especially when leaders no longer need separate tools to see burndowns, cumulative flows, and roadmap status in one place.
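The arithmetic is simple enough to keep on one page; every number in this sketch is an illustrative assumption, not a benchmark or a quoted price.

```python
# Back-of-the-envelope ROI sketch; all figures are illustrative assumptions.
seats = 20
price_per_seat_month = 12.0            # illustrative per-seat price; check current pricing
subscription = seats * price_per_seat_month

status_meetings_removed_per_week = 2   # one-hour meetings replaced by dashboards
attendees = 8
loaded_hourly_rate = 75.0              # blended cost of an attendee-hour
meeting_savings = status_meetings_removed_per_week * attendees * loaded_hourly_rate * 4.33

manual_handoff_hours_saved = 10        # per month, from automations
handoff_savings = manual_handoff_hours_saved * loaded_hourly_rate

print(f"Monthly subscription: ${subscription:,.0f}")
print(f"Monthly savings:      ${meeting_savings + handoff_savings:,.0f}")
```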
monday dev integrates with GitHub, GitLab, CircleCI, Slack, Jira, and Salesforce. Describe your integration map for a mid-size team—events you sync, fields you mirror, and one case where bi-directional sync removed a status meeting. What errors or drift did you measure before/after?
I mirror commits, PRs, and build statuses from GitHub or GitLab to the dev board, and I surface CircleCI pipeline signals next to sprint items. Slack handles notifications (ready-for-review, build failed, release shipped), while Jira or Salesforce can sync epics or customer-impact fields for cross‑department alignment. In one case, bi‑directional sync between GitHub and the sprint board made a weekly status meeting redundant—review progress was visible in real time, and release notifications were automated. Before, teams reported drift because statuses lagged; after, the same board became the single source of truth, and leadership trusted the dashboard without extra check-ins.
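A minimal sketch of the event-to-status mapping behind that setup; the handler and the update_item_status stub are hypothetical stand-ins, not monday dev's actual integration or API.

```python
# Hypothetical webhook handler: map GitHub pull-request events to board statuses.
# update_item_status is a stand-in stub, not a real monday dev API call.
STATUS_BY_EVENT = {
    ("pull_request", "opened"): "In Review",
    ("pull_request", "closed_merged"): "Ready to Ship",
    ("check_suite", "failed"): "Build Failed",
}


def update_item_status(item_id: str, status: str) -> None:
    # In a real integration this would call the work-management platform's API.
    print(f"item {item_id} -> {status}")


def handle_github_event(event: str, action: str, payload: dict) -> None:
    """Derive the board status from the event and push it to the linked item."""
    if event == "pull_request" and action == "closed" and payload.get("merged"):
        action = "closed_merged"
    status = STATUS_BY_EVENT.get((event, action))
    if status and (item_id := payload.get("item_id")):
        update_item_status(item_id, status)


handle_github_event("pull_request", "closed", {"merged": True, "item_id": "1234"})
```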
You positioned monday dev against Jira’s deep issue tracking and 89.65% market share. Where does monday dev win in cross-department alignment, and where does Jira still shine? Share a migration story with training time, custom workflows rebuilt, and the adoption curve by role.
Jira’s 89.65% market share in issue tracking underscores its depth and maturity for technical teams, and it shines with advanced reporting and marketplace breadth. monday dev often wins when you need cross‑department alignment—engineering, product, and leadership can share one visual language and navigate dashboards without the steep learning curve that can slow non‑technical stakeholders. In a migration, we rebuilt workflows with customizable boards and methodology flexibility so squads could pick Scrum, Kanban, or hybrid; the training effort was lighter because the visuals matched how teams already thought about work. Adoption followed a predictable curve: product and leadership engaged early via dashboards and roadmaps, and engineers leaned in once Git integrations and automations reduced manual updates.
ClickUp offers a broad feature set but a steep learning curve. When would you still recommend it over monday dev, and how would you de-risk onboarding? Give a concrete rollout plan, milestones for proficiency, and metrics to confirm productivity didn’t dip.
I recommend ClickUp when a team needs extreme customization in one place and is willing to invest in learning. To de‑risk onboarding, I run a phased rollout: start with a single team in a pilot, then expand to adjacent squads after templates stabilize. Milestones include basic navigation, sprint reporting, and workload views; success criteria are steady throughput, healthy WIP, and unchanged (or improved) burndown behavior during the pilot. If the team needs simpler cross‑departmental visibility with fewer change-management hurdles, monday dev is typically faster to adopt.
Asana limits assignees to one person and gates time tracking behind higher tiers. How have those constraints affected agile rituals like swarming? Share a real workaround, its trade-offs, and the measurable effect on cycle time or quality.
Single‑assignee constraints can complicate swarming, where multiple devs share responsibility during crunch moments. The workaround is to keep one assignee for accountability and use subtasks, tags, or comments to coordinate helpers; for time tracking on lower tiers, teams rely on integrations or keep time lightweight elsewhere. The trade‑off is added overhead in coordination moments and less granular time visibility unless you move to higher tiers. In practice, teams often switch when they realize that rituals like swarming are smoother in tools where multiple contributors can be reflected without kludges.
Smartsheet claims use by over 90% of the Fortune 100 and blends grid, Gantt, and Kanban. In an enterprise hybrid setup, when does that spreadsheet metaphor help or hinder? Walk through a portfolio dashboard you built and what executive metrics it clarified.
The spreadsheet metaphor helps when leaders are fluent in grid logic and want to pivot between grid, Gantt, and Kanban with minimal friction—especially in data-heavy environments. It can hinder when development teams need purpose-built agile constructs like built‑in story points, burndowns, or sophisticated backlog grooming without extra setup. I’ve built portfolio dashboards that roll up multiple projects into real-time views—executives saw schedules in Gantt, WIP in Kanban, and overall status in the grid, all in one place. The clarity was in the blend: traditional planning comfort plus agile visuals, though teams noted the total cost can rise as premium add‑ons accumulate.
Wrike’s resource management is strong but advanced agile features sit on higher tiers. How do you justify the Business vs. Enterprise jump? Share a capacity planning example, the exact workload visualizations leaders relied on, and the utilization improvements you recorded.
I justify the jump when leaders need robust resource management with enterprise security and BI integrations; Wrike’s advanced features unlock richer planning and governance. Capacity planning hinged on workload and utilization views—leaders could see who was overcommitted and rebalance work before sprints started. The visualizations, paired with risk prediction, gave stakeholders a shared basis for decisions and reduced last‑minute fire drills. The story was consistent: a higher tier paid for itself by preventing bottlenecks and making workload debates objective rather than emotional.
Notion lacks built-in story points and burndown charts. If a team insists on Notion, how do you patch those gaps—templates, formulas, or third-party tools? Outline the exact setup, the maintenance overhead, and the reporting fidelity compared to purpose-built platforms.
In Notion, I create databases for backlog and sprints, then add custom properties for estimates, statuses, and dates; formulas can approximate velocity, and views can mimic Kanban or timelines. For burndowns, I either build a lightweight chart via formulas and linked views or connect a third‑party tool; documentation and project data benefit from living together. The overhead lies in maintaining templates and keeping formulas aligned with actual practice—teams must be disciplined or the reporting drifts. Compared to purpose‑built platforms, reporting fidelity typically lags, but for teams prioritizing an all‑in‑one workspace, it’s workable with clear guardrails.
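When a team wants the numbers without fighting Notion formulas, I sometimes compute velocity outside the workspace from a database export; a minimal sketch, assuming column names like Status, Estimate, and Sprint were set up in the database:

```python
# Sketch of patching the reporting gap outside Notion: per-sprint velocity from a
# CSV export of the sprint database. Column names are assumptions about how the
# workspace was configured, not Notion defaults.
import csv
from collections import defaultdict


def velocity_by_sprint(export_path: str) -> dict[str, int]:
    done_points: dict[str, int] = defaultdict(int)
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["Status"] == "Done" and row["Estimate"]:
                done_points[row["Sprint"]] += int(row["Estimate"])
    return dict(done_points)


# Example: velocity_by_sprint("sprints_export.csv")
# -> {"Sprint 12": 34, "Sprint 13": 29}
```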
Trello’s simplicity scales poorly as boards grow. What guardrails have you used—card aging, WIP limits via Power-Ups, or swimlanes—to keep signal high? Share one cleanup sprint: steps you took, board metrics before/after, and how you preserved context.
Guardrails start with WIP limits via Power‑Ups, card aging to surface staleness, and swimlanes for priority or work type. In a cleanup sprint, we archived stale cards with a clear rule, merged duplicates, and converted ambiguous titles into crisp, outcome‑based phrasing; we kept context by linking to docs and recording final notes before archiving. Post‑cleanup, the board felt breathable again: fewer columns, clearer swimlanes, and faster stand‑ups because everyone could scan signal over noise. The team regained confidence in the board, which is what ultimately keeps adoption strong.
Aha! links goals to dev work but has a steep learning curve. For a product-led org, how do you sequence Aha! with monday dev or Jira? Provide a concrete flow from strategy to sprint, data handoffs, and the validation metrics you monitor at release.
I position Aha! for strategy and portfolio—goals, themes, and roadmaps—then push prioritized features into monday dev or Jira for execution. The handoff includes clear fields: goal alignment, priority, acceptance criteria, and links back to strategy; execution tools own sprint planning, burndowns, and day‑to‑day visibility. At release, I validate against the original goals using roadmap views and dashboards, checking whether shipped work aligns with the intended outcomes. This separation of concerns keeps product discovery and strategy crisp while letting delivery teams move at their natural cadence.
Zoho Sprints’ Premier plan is $5/user/month with CI/CD integrations and time tracking. When is it the better pick for a Scrum-first shop? Share a sample backlog-to-sprint workflow, the velocity reports you trust, and the trade-offs you’ve faced with limited integrations.
It’s a solid pick for Scrum‑first teams that want structure, built‑in time tracking, and affordability; Premier at $5/user/month is compelling. A clean workflow runs from backlog grooming to sprint planning with drag‑and‑drop prioritization, then into daily execution on Scrum/Kanban views, finishing with burndown/burnup and velocity reports. Those reports give a reliable pulse without needing extra setup, which teams appreciate. The trade‑off is integrations—compared to Jira or monday dev, you may hit limits faster if your ecosystem is wide, so confirm your CI/CD and communication tools are covered before scaling.
You outline five steps to choose a platform: agile flavor, integrations, collaboration gaps, future growth, and value. Can you detail how you run this assessment—stakeholder interviews, tool audits, and pilots—and the hard criteria or scorecard you use to recommend a platform?
I start with stakeholder interviews to capture actual practices—not the textbook version—across engineering, product, and leadership. Then I map essential integrations (GitHub, GitLab, Slack, CI/CD, CRM), run a tool audit against collaboration gaps, and size future growth needs so we aren’t back here in six months. A short pilot tests usability, dashboards, and AI/risk features while measuring friction and adoption; the scorecard weights methodology flexibility, integration depth, visibility, scalability, and value rather than sticker price. The right tool is the one that increases speed and confidence while keeping developers in flow and leaders aligned in real time.
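The scorecard itself can be as small as this sketch; the weights and one-to-five scores are illustrative assumptions, not a verdict on any specific tool.

```python
# Illustrative platform scorecard; weights and scores are assumptions to show the
# shape of the decision, not a recommendation for any particular vendor.
WEIGHTS = {
    "methodology_flexibility": 0.25,
    "integration_depth": 0.25,
    "visibility": 0.20,
    "scalability": 0.15,
    "value": 0.15,
}


def score(ratings: dict[str, float]) -> float:
    """Weighted 1-5 score for a candidate platform."""
    return round(sum(ratings[criterion] * weight for criterion, weight in WEIGHTS.items()), 2)


candidates = {
    "Tool A": {"methodology_flexibility": 5, "integration_depth": 4,
               "visibility": 5, "scalability": 4, "value": 4},
    "Tool B": {"methodology_flexibility": 3, "integration_depth": 5,
               "visibility": 3, "scalability": 5, "value": 3},
}
for name, ratings in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(name, score(ratings))
```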
On migration, what phased plan minimizes disruption—data mapping, sandbox testing, role-based training, and cutover timing? Share a concrete timeline (weeks by phase), the import tools you used, the top two pitfalls you hit, and how you measured stabilization.
We phase it: data mapping and cleanup, sandbox testing with real samples, role‑based training, then cutover. Import tools and guided support reduce risk, and a planned rollout avoids the “big bang” shock; we schedule a 14‑day window that doubles as a free trial period on Pro features where helpful. The common pitfalls are underestimating field mapping complexity and skipping hands‑on training for non‑technical roles; both lead to rework. Stabilization shows up when dashboards, burndowns, and cumulative flows mirror reality and teams no longer need parallel status meetings to trust the system.
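For the data-mapping phase, I keep the field and status maps declarative and validate them in the sandbox before cutover; a minimal sketch with hypothetical field names:

```python
# Sketch of the data-mapping phase: a declarative map from source fields to target
# fields, validated on sample records before anything is imported. Names are hypothetical.
FIELD_MAP = {
    "Summary": "Item name",
    "Story Points": "Estimate",
    "Assignee": "Owner",
    "Fix Version": "Release",
    "Status": "Status",
}

STATUS_MAP = {"To Do": "Backlog", "In Progress": "Working on it", "Done": "Done"}


def map_record(source: dict) -> dict:
    """Translate one exported record into the target schema, flagging unmapped fields."""
    unmapped = [k for k in source if k not in FIELD_MAP]
    if unmapped:
        raise ValueError(f"unmapped fields need a decision before cutover: {unmapped}")
    target = {FIELD_MAP[k]: v for k, v in source.items()}
    target["Status"] = STATUS_MAP.get(target.get("Status"), target.get("Status"))
    return target


print(map_record({"Summary": "Fix login", "Story Points": 3,
                  "Assignee": "dana", "Status": "In Progress"}))
```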
You stress “total visibility without micromanagement.” Which dashboards, burndown charts, and cumulative flow views do you standardize by role, how often do you review them, and can you share a story where visibility caught a slipping dependency early—including the exact lead-time delta?
I standardize a leader’s dashboard with portfolio health, burndown, and risk signals; product sees roadmap progress and epic status; teams get sprint burndowns and cumulative flow. Reviews occur weekly at the leadership level and more frequently within squads, all in the same workspace to avoid context switching. In one program, the dashboard flagged a dependency that was drifting; by catching it early, we reshuffled the sequence and kept the release on track without adding new meetings. That’s the heart of it: clarity that accelerates, not surveillance that slows.
How do you prove the value of AI in planning and risk spotting beyond anecdotes? Describe your baseline metrics (cycle time, predictability, throughput), the experimental setup you used, the statistical thresholds you consider meaningful, and the changes you saw over multiple sprints.
I baseline cycle time, predictability, and throughput across multiple sprints, then enable AI features like smart sprint planning and risk identification for a comparable run. The experimental setup keeps teams and work types consistent while reducing external noise; we review dashboards and burndowns to ensure we’re not gaming metrics. I look for consistent improvements—reduced surprises and steadier sprint outcomes—over several iterations, not one lucky sprint. When leaders see fewer rollovers and clearer early warnings on the same boards where work happens, the value becomes concrete and repeatable.
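A minimal sketch of that comparison, using made-up cycle-time samples and a judgment-call threshold rather than real team data:

```python
# Sketch of the before/after comparison: cycle times from a baseline run vs. a run
# with AI planning enabled. Samples are invented for illustration; the 1-day and
# p < 0.05 thresholds are judgment calls, not standards.
from statistics import median
from scipy.stats import mannwhitneyu

baseline_cycle_days = [6.0, 8.5, 7.0, 9.0, 6.5, 10.0, 7.5, 8.0]
with_ai_cycle_days = [5.0, 6.0, 5.5, 7.0, 6.5, 5.0, 6.0, 5.5]

stat, p_value = mannwhitneyu(baseline_cycle_days, with_ai_cycle_days, alternative="greater")
delta = median(baseline_cycle_days) - median(with_ai_cycle_days)

# Treat the shift as meaningful only if it is both practically and statistically real.
print(f"median improvement: {delta:.1f} days, p = {p_value:.3f}")
print("meaningful" if delta >= 1.0 and p_value < 0.05 else "not yet conclusive")
```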
Do you have any advice for our readers?
Start with how your teams actually build, not how a tool wants you to build. Choose platforms that respect your rhythm—Scrum, Kanban, or hybrid—and connect work to goals with dashboards that everyone understands at a glance. Pilot with a single squad, harden templates, and only then scale; keep integrations tight so developers never have to be their own project managers. Above all, optimize for momentum: fewer meetings, cleaner handoffs, and visibility that earns trust without micromanaging—because teams that move with confidence ship better products, faster.
