I’m thrilled to sit down with Marco Gaietti, a veteran in management consulting with decades of experience shaping business strategies across strategic management, operations, and customer relations. Today, we’re diving into the transformative role of AI agents in the workforce and how HR is adapting to this seismic shift. Our conversation explores the integration of AI into daily work, the critical partnership between HR and IT, the management of AI systems, new ways to measure performance in hybrid teams, and the evolving landscape of skills and job roles in an AI-driven world. Let’s get started.
How do you see AI agents reshaping the day-to-day work experience for employees in modern organizations?
AI agents are fundamentally changing the rhythm of work by taking on repetitive, data-heavy tasks that used to consume a lot of time. For instance, in many organizations, they’re handling things like scheduling, data entry, or even initial customer inquiries. This frees up employees to focus on creative problem-solving, relationship-building, and strategic thinking. However, it’s not just about offloading work—it’s also about creating a new kind of collaboration where employees interact with AI as a teammate. The challenge is ensuring that this shift feels empowering rather than disruptive, which means constant communication and training to help staff adapt.
Can you share a specific example of a task or role AI agents have taken over and the impact it’s had on human workers?
Absolutely. In one organization I’ve worked with, AI agents were implemented to manage first-level HR inquiries, like answering questions about payroll or leave policies. This used to be a significant time drain for the HR team, often leading to delays in responses. With AI handling these routine queries, HR professionals could redirect their focus to more complex employee engagement initiatives and personal coaching. The impact was twofold: response times for basic questions dropped dramatically, and employee satisfaction with HR support improved because the team had more bandwidth for meaningful interactions. It wasn’t perfect at first—there were hiccups in the AI’s accuracy—but over time, with feedback loops, it became a game-changer.
What’s your perspective on the notion that every role today is essentially a tech role, and how does that manifest in the workplace?
I think that idea holds a lot of truth. Technology, especially AI, has permeated every corner of work, from marketing to manufacturing. Even roles traditionally seen as non-technical, like HR or sales, now require a baseline of tech literacy—whether it’s using AI tools for analytics or understanding how automated systems impact workflows. In the workplace, this manifests as a universal need for upskilling. I’ve seen companies start to embed tech training into onboarding for all employees, not just IT staff. It’s about creating a common language around technology so everyone can contribute to and benefit from these tools, rather than being left behind.
Why is a strong collaboration between HR and IT so critical when integrating AI agents into an organization?
HR and IT collaboration is non-negotiable because AI integration isn’t just a technical rollout—it’s a people transformation. HR brings the understanding of workforce dynamics, culture, and employee needs, while IT provides the technical expertise to implement and maintain these systems. Without this partnership, you risk deploying AI tools that either don’t align with employee realities or fail to deliver on technical promises. I’ve seen successful integrations happen only when both teams co-create strategies, ensuring the technology serves human goals, like improving productivity or engagement, rather than just being a shiny new toy.
What challenges have you encountered in bridging the gap between HR and IT, and how did you address them?
One major challenge is the difference in priorities and language between the two departments. HR often focuses on qualitative outcomes like employee morale, while IT emphasizes metrics like system uptime or data security. This can lead to misalignment. In one case, I facilitated regular cross-departmental workshops where HR and IT teams mapped out shared goals, like enhancing employee experience through a new AI tool. We also appointed liaisons from each team to translate needs and concerns. Over time, this built trust and a shared understanding, breaking down those silos. It’s not a quick fix—it takes consistent effort—but it’s essential for long-term success.
How do you ensure HR and IT stay aligned on objectives during AI adoption?
Alignment comes down to clear communication and shared metrics. I always advocate for defining joint KPIs from the start—things like user adoption rates of AI tools or employee feedback scores on tech-driven processes. Regular check-ins, like monthly steering committee meetings, help keep both teams on the same page. It’s also important to celebrate small wins together, whether it’s a successful pilot or positive employee feedback. That builds a sense of shared purpose. Ultimately, both departments need to see AI adoption as a unified mission to drive business outcomes, not just their individual departmental goals.
In your view, who should take ownership of managing AI agents within an organization—HR, IT, or another group?
I don’t think there’s a one-size-fits-all answer—it depends on the organization’s structure and the specific use of AI. In many cases, I’ve seen a hybrid approach work best, where IT handles the technical management, like updates and troubleshooting, while HR oversees the people-facing aspects, such as how AI impacts roles or training needs. Increasingly, though, I’m seeing the rise of dedicated AI governance teams that include representatives from both HR and IT, along with legal or compliance experts. This ensures a balanced perspective, because AI agents touch everything from data privacy to employee morale. It’s less about who owns it and more about who collaborates on it.
What skills or qualities do you think are essential for those tasked with managing AI systems?
First and foremost, you need a blend of technical and human-centric skills. On the technical side, understanding data analytics and AI functionality is crucial to troubleshoot issues or optimize performance. But equally important are soft skills like communication and empathy, because managing AI often means managing change for people. I’ve found that adaptability is key—AI evolves rapidly, so managers need to be lifelong learners. Lastly, ethical judgment is critical. Whoever manages these systems must prioritize fairness and transparency, ensuring AI doesn’t unintentionally harm employees or skew decisions. It’s a tall order, but it’s necessary.
How do you strike a balance between leveraging AI agents and maintaining human oversight for critical decisions?
It’s all about defining boundaries upfront. AI is fantastic for handling routine or data-intensive tasks, but human oversight is non-negotiable for decisions involving ethics, nuance, or high stakes—like performance evaluations or conflict resolution. I’ve advised organizations to create clear protocols: AI can recommend or analyze, but final calls rest with humans. Regular audits of AI outputs also help catch biases or errors before they escalate. The goal is to use AI as a support tool, not a replacement for judgment. It’s a partnership where humans provide the context and values that AI can’t replicate.
What approaches does your experience suggest for measuring the effectiveness of teams that combine humans and AI agents?
Measuring effectiveness in hybrid teams requires a shift from traditional metrics. It’s not just about output anymore; it’s about synergy. I’ve seen organizations track metrics like task completion time to see how AI speeds up processes, alongside qualitative feedback from employees on how supported they feel by AI tools. Another useful measure is error reduction—AI often catches mistakes humans might miss. But it’s also important to assess team morale and collaboration. If AI is creating friction instead of harmony, that’s a red flag. Surveys and focus groups can reveal how well the human-AI dynamic is working beyond just numbers.
What new metrics or strategies have you developed to evaluate performance in these hybrid environments?
One strategy I’ve found effective is creating composite performance scores that blend individual human contributions with AI-enabled outcomes. For example, in a sales team using AI for lead scoring, we might measure not just the number of deals closed but also how effectively the team used AI insights to prioritize leads. Another approach is tracking learning curves—how quickly humans adapt to working with AI tools. I’ve also pushed for engagement metrics, like how often employees interact with AI systems willingly versus out of obligation. These metrics give a fuller picture of whether the hybrid setup is truly enhancing performance or just adding complexity.
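The composite-score idea described above can be made concrete with a small sketch. This is a minimal illustration in Python, not a formula from the interview: the metric names (deals closed versus target, share of AI-suggested leads acted on) and the 60/40 weighting are hypothetical assumptions chosen for the sales-team example.

```python
# Hypothetical composite performance score for a hybrid human-AI sales team.
# Metric names and weights are illustrative assumptions, not a prescribed formula.

def composite_score(deals_closed: int,
                    deals_target: int,
                    ai_leads_followed: int,
                    ai_leads_suggested: int,
                    weight_outcomes: float = 0.6,
                    weight_ai_usage: float = 0.4) -> float:
    """Blend a human outcome metric (deals closed vs. target) with an
    AI-utilization metric (share of AI-prioritized leads acted on).
    Returns a score between 0.0 and 1.0."""
    outcome_ratio = min(deals_closed / deals_target, 1.0) if deals_target else 0.0
    usage_ratio = (ai_leads_followed / ai_leads_suggested
                   if ai_leads_suggested else 0.0)
    return weight_outcomes * outcome_ratio + weight_ai_usage * usage_ratio

# Example: 8 of 10 target deals closed, 15 of 20 AI-suggested leads pursued.
score = composite_score(deals_closed=8, deals_target=10,
                        ai_leads_followed=15, ai_leads_suggested=20)
print(round(score, 2))  # 0.6 * 0.8 + 0.4 * 0.75 = 0.78
```

The point of the blend is that neither number alone tells the story: a team could close deals while ignoring the AI entirely, or lean on the AI without results. Weighting both captures the synergy the interview describes.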
Can you walk us through a specific success or challenge you’ve faced when assessing human-AI collaboration?
Sure. A notable success was with a client in the logistics sector where we integrated AI to optimize delivery routes alongside human planners. We measured success through reduced fuel costs and faster delivery times, which improved by 20% in the first quarter. But the real win was in employee feedback—planners felt less stressed because AI handled the grunt work of calculations, letting them focus on customer issues. The challenge came in interpreting data early on; the AI occasionally suggested impractical routes due to outdated inputs. We had to refine the system with human feedback loops, which taught us that collaboration isn’t a set-it-and-forget-it process—it’s iterative.
How are traditional job roles being redefined to accommodate the skills needed in an AI-augmented workplace?
Job roles are increasingly being broken down into skill sets rather than fixed titles. I’ve seen companies move away from rigid job descriptions to frameworks that prioritize adaptability—hiring for skills like critical thinking or digital fluency over specific experience. For instance, a customer service role might now include training on AI chatbots alongside empathy and communication skills. It’s about building flexibility into roles so employees can pivot as AI takes on more tasks. This also means rethinking career paths to include continuous learning, ensuring people aren’t boxed into roles that might become obsolete.
How do you approach deciding which tasks should be assigned to humans versus AI agents?
It starts with a clear analysis of value and complexity. Tasks that are repetitive, rule-based, or data-intensive—like processing forms or analyzing large datasets—are prime candidates for AI. Humans, on the other hand, excel at tasks requiring emotional intelligence, creativity, or ethical judgment, like mentoring or resolving conflicts. I’ve worked with organizations to map out workflows and identify where AI can augment rather than replace. It’s a balancing act; you don’t want AI overstepping into areas needing human touch, nor do you want humans bogged down by tasks AI could handle faster. Regular reviews of these assignments keep the split dynamic as technology and needs evolve.
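The value-and-complexity analysis described above can be sketched as a simple triage rule. This is an illustrative Python example only; the task attributes and routing thresholds are hypothetical, and a real workflow mapping would be far richer than four boolean flags.

```python
# Hypothetical triage rule for routing tasks to AI agents vs. humans.
# Attribute names and routing logic are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    repetitive: bool           # rule-based or recurring work
    data_intensive: bool       # large-volume data processing
    needs_empathy: bool        # emotional intelligence required
    ethically_sensitive: bool  # judgment calls with ethical stakes

def assign(task: Task) -> str:
    """Route a task following the value/complexity analysis:
    human-touch criteria always take precedence, then automation criteria."""
    if task.needs_empathy or task.ethically_sensitive:
        return "human"
    if task.repetitive or task.data_intensive:
        return "ai_agent"
    return "human_review"  # ambiguous cases default to a human decision

print(assign(Task("process expense forms", True, True, False, False)))  # ai_agent
print(assign(Task("mediate team conflict", False, False, True, True)))  # human
```

Note the ordering: the human-touch checks come first, so a task that is both repetitive and ethically sensitive still goes to a person. That mirrors the principle in the interview that AI can recommend or analyze, but judgment-laden calls rest with humans, and the routing should be revisited regularly as technology and needs evolve.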
What’s your forecast for the future of AI agents in the workforce over the next decade?
I believe we’re only seeing the tip of the iceberg. Over the next decade, AI agents will become even more integrated, evolving from task-specific tools to strategic contributors that anticipate needs and offer insights proactively. We’ll likely see more personalized AI companions tailored to individual employee roles, enhancing productivity in ways we can’t fully imagine yet. However, the human element will remain central—AI will amplify, not replace, human potential. The bigger question is how organizations will address ethical and cultural challenges, like ensuring equity in AI access and preventing skill atrophy. It’s an exciting, complex road ahead, and I think the winners will be those who prioritize people alongside technology.