Expert IT strategist Marco Gaietti discusses the shift from simple automation to agentic AI, sharing insights on how structured governance and strategic role redesign are essential for scaling these autonomous systems effectively. With decades of experience in management consulting, Gaietti explores the practical challenges of data integrity, the necessity of AI steering committees, and the operational shifts required to move from pilot programs to global production.
Agentic AI is often described as the next step beyond simple data prediction and pattern recognition. How does this technology transform manual tasks into autonomous processes, and what distinguishes its ability to perform actions from the capabilities of standard generative models?
The evolution of artificial intelligence has moved through distinct stages, starting with machine learning, where we simply interrogated data for patterns. We then moved into the era of generative AI, which focuses on predicting outcomes and extrapolating information from massive datasets. Agentic AI represents the next leap because it doesn’t just predict; it performs specific actions or processes autonomously. While people often compare it to traditional robotic process automation, the key difference is that agentic AI can be fed by generative predictions to execute complex tasks that previously required manual intervention. It transitions the technology from a “thinking” partner to a “doing” partner, effectively closing the loop between data analysis and operational execution.
Internal research tools can now aggregate information from sales, finance, and external press releases to create a 360-degree customer view. What specific operational shifts occur when hours of research are condensed into a single button press, and how do you measure the resulting gains in accuracy?
When you deploy a tool like “Deep Research,” you are fundamentally changing how a team understands its clients by spanning systems like Salesforce, financial databases, and ServiceNow simultaneously. Instead of a staff member spending several hours manually hunting for press releases or ticket histories, an agent provides a comprehensive brief instantly. We have found that these agents are over 90% accurate in their research and analysis, which provides a massive safety net for decision-making. Beyond just time-saving, we see up to a 40% increase in tangible benefits, whether that manifests as improved cash flow or higher team productivity. This shift allows employees to move away from the “hunt and gather” phase of their work and go straight to high-value strategy and problem-solving.
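The fan-out-and-merge pattern Gaietti describes can be sketched in a few lines. This is a hypothetical illustration only: the `fetch_*` functions below stand in for real connectors to Salesforce, financial databases, and ServiceNow, which are not shown and whose actual APIs differ.

```python
# Hypothetical sketch: an agent fans out across several systems and
# merges the results into one 360-degree customer brief. Each fetch_*
# function is a placeholder for a real connector.

def fetch_crm(account_id):
    return {"account": account_id, "open_opportunities": 3}

def fetch_finance(account_id):
    return {"outstanding_invoices": 2, "days_sales_outstanding": 41}

def fetch_tickets(account_id):
    return {"open_tickets": 5, "avg_resolution_days": 2.5}

def build_customer_brief(account_id):
    """Merge every source into a single brief for the account team."""
    brief = {"account_id": account_id}
    for source in (fetch_crm, fetch_finance, fetch_tickets):
        brief.update(source(account_id))
    return brief

print(build_customer_brief("ACME-001"))
```

The point of the sketch is the shape of the workflow, not the connectors: the hours of manual "hunt and gather" collapse into one call that assembles the brief.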
Implementing a successful AI steering committee involves collaboration between HR, IT, and legal departments. What specific information must a project proposal include to pass through a formal review portal, and what training benchmarks should employees meet before they are granted access to these tools?
Our governance model requires anyone proposing an AI tool to fill out a detailed form on a centralized portal that outlines the specific benefits and whether the solution is off-the-shelf or requires custom development. This proposal is then scrutinized by a working group that evaluates it through the lenses of legal liability, HR policy, and privacy constraints. Crucially, we do not grant access to powerful tools like Microsoft Copilot until an employee has completed specific training to understand our governance policy. This training covers the “dos and don’ts,” the inherent dangers of AI, and the specific ethical guardrails we’ve established. We also maintain a rigorous register to monitor model drift, ensuring that as data changes over time, the algorithm doesn’t begin to produce degraded or biased outcomes.
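The drift register Gaietti mentions can be reduced to a simple idea: store a baseline for each model, periodically compare recent outputs against it, and flag the register entry when the shift exceeds a tolerance. The statistic and threshold below are illustrative assumptions, not his organization's actual policy.

```python
# Hypothetical sketch of a drift-register check: compare a model's
# recent output distribution against a stored baseline and flag the
# entry for review when the shift exceeds a tolerance.

from statistics import mean

def drift_score(baseline, recent):
    """Relative shift in mean output between two samples."""
    base = mean(baseline)
    return abs(mean(recent) - base) / abs(base)

def check_register_entry(entry, baseline, recent, tolerance=0.10):
    """Annotate a registered model with its drift score and review flag."""
    score = drift_score(baseline, recent)
    entry["drift_score"] = round(score, 3)
    entry["needs_review"] = score > tolerance
    return entry

entry = {"model": "ticket-triage-v2"}
baseline = [0.61, 0.58, 0.64, 0.60]
recent = [0.72, 0.75, 0.70, 0.74]
print(check_register_entry(entry, baseline, recent))
```

In practice a production register would track richer statistics per model, but even a check this small catches the slow degradation Gaietti warns about before it produces biased outcomes.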
As more employees begin building their own agents, organizations face new risks regarding database access and system integrity. How do you categorize different user tiers to manage read and write permissions, and what orchestration is required to prevent hundreds of autonomous agents from causing conflicts?
To manage this “brave new world,” we categorize users into three distinct segments: professional IT developers, business super users, and general staff who might be experimenting with basic scripts. Each tier requires different access rights, particularly when deciding who can merely read data versus who can write to a database. If an agent adds incorrect data to a system and a feeder system unknowingly reports on it, the ripple effect of errors can be catastrophic. CIOs must get ahead of this by creating an orchestration layer that manages these hundreds of autonomous agents to ensure they don’t conflict or compromise the integrity of the core IT environment. Without this structured governance, the risk of agents “running wild” and hitting systems in ways that cause operational failures becomes a very real threat.
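The three-tier model lends itself to a straightforward authorization table: each tier maps to the systems its agents may read and the smaller set they may write to. The tier names follow the interview; the system lists and check function are illustrative assumptions.

```python
# Hypothetical sketch of the three-tier access model. Each tier
# grants read access broadly and write access narrowly; general
# staff experimenting with basic scripts get no write access at all.

TIERS = {
    "it_developer":  {"read": {"crm", "finance", "itsm"}, "write": {"crm", "itsm"}},
    "super_user":    {"read": {"crm", "itsm"},            "write": {"itsm"}},
    "general_staff": {"read": {"crm"},                    "write": set()},
}

def authorize(tier, system, action):
    """Return True only if the tier grants this action on this system."""
    grants = TIERS.get(tier)
    return grants is not None and system in grants.get(action, set())

assert authorize("it_developer", "crm", "write")
assert authorize("general_staff", "crm", "read")
assert not authorize("general_staff", "crm", "write")  # read-only tier
```

An orchestration layer would sit on top of a check like this, routing every agent action through `authorize` so that no script, however well-intentioned, can write incorrect data into a system that feeder systems then report on.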
Many organizations choose to reject AI tools that rely on inaccurate source data or are entangled in copyright disputes. Why is it vital to avoid using AI for individual employee performance reviews, and how can anonymized data be used instead to improve broader business functions without violating privacy?
We have been very firm in rejecting use cases where the source data is questionable or where there are active legal disputes, such as a vendor facing copyright infringement lawsuits. One of our strictest boundaries is employee performance; we believe you cannot use AI to determine whether a specific person should be rewarded or terminated. The risk of bias and the lack of human nuance in such a sensitive area could lead to unfair or legally problematic outcomes. However, we do see immense value in using anonymized, aggregated data to analyze global trends, such as identifying common ticket issues or optimizing resolution steps. This allows the business to improve its collective intelligence and service speed without violating the privacy or professional dignity of the individual worker.
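The distinction Gaietti draws, no AI on individual performance but full use of aggregated trends, maps cleanly onto an anonymize-then-aggregate pipeline. The field names below are illustrative assumptions, not his firm's schema.

```python
# Hypothetical sketch: strip identifying fields from ticket records,
# then aggregate by issue category -- global trends are preserved
# while no individual can be singled out.

from collections import Counter

IDENTIFYING_FIELDS = {"employee_id", "name", "email"}

def anonymize(record):
    """Drop any field that could identify a person."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

def common_issues(tickets):
    """Count tickets per category, using only the anonymized data."""
    safe = [anonymize(t) for t in tickets]
    return Counter(t["category"] for t in safe)

tickets = [
    {"employee_id": 101, "category": "password_reset"},
    {"employee_id": 102, "category": "vpn"},
    {"employee_id": 103, "category": "password_reset"},
]
print(common_issues(tickets).most_common(1))  # most frequent ticket type
```

The design choice is that anonymization happens before any analysis: downstream code never touches the identifying fields, so it cannot be repurposed to evaluate a specific worker.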
Moving from a proof of concept to a production-ready environment often requires significant data normalization and interface automation. What are the common technical hurdles during this transition, and how should leadership explain the cost and complexity differences between a small pilot and a global rollout?
The jump from a “cheap and cheerful” proof of concept to a full production environment is often where many projects stumble. Leadership must communicate that a global rollout involves massive efforts in data normalization and the automation of complex interfaces that a small pilot simply doesn’t touch. You aren’t just scaling the software; you are scaling the data integrity requirements and the governance oversight needed for a global stage. This transition is inherently more expensive and technically demanding because the stakes are higher; a mistake in a pilot is a learning moment, but a mistake in production can halt a global workflow. CIOs need to manage executive expectations by explaining that the “last mile” of production is where the most critical—and costly—engineering work resides.
High-growth companies are using agentic AI to redesign roles rather than simply reducing headcount. How can agents allow a single employee to manage tasks that previously required multiple hand-offs, and what strategies ensure that these productivity gains actually lead to a faster speed to market?
The primary driver for agentic AI is not headcount reduction, but the absolute speed of execution and the ability to scale without adding back-office overhead. In markets like Japan and Europe, we are seeing roles redesigned so that a single employee can handle a workflow that previously required three different people and two hand-offs. By automating the connective tissue between these tasks, an agent allows one person to oversee the entire process from start to finish. This role redesign is what fuels a faster speed to market, which is essential because if you don’t increase your operational velocity, your competitors certainly will. We view AI as a way to allow a growing company to do significantly more within its existing constraints, focusing on productivity as a competitive advantage rather than a way to trim the payroll.
What is your forecast for agentic AI?
I believe that within the next few years, we will see a shift where governance becomes the most critical role of the IT department as agents become ubiquitous. CIOs will move from being service providers to being “conductors” of an autonomous workforce, where the focus is on maintaining data integrity and ensuring that hundreds of agents are working in harmony. We will see a massive surge in job redesign, where the “entry-level” analyst role disappears, replaced by “agent orchestrators” who manage the output of these tools. Ultimately, the companies that thrive will be those that didn’t just buy the latest AI tools, but those that built the most robust ethical and operational guardrails to let those tools run at full speed. My advice is to stop fearing the technology and start obsessing over your data quality, because your AI will only ever be as good as the information it is allowed to touch.
