If you’ve ever tried to bring AI into HR, you know the work starts long before the technology goes live. Mitratech's Susan Anderson and Aimee Pedretti have been there—testing models, managing change, and reminding everyone that “human-centered” isn’t a tagline, it’s a discipline. In their SHRM 2025 session, they shared five lessons from the messy, real-world side of responsible AI.
Overview
The promise is exciting: faster insights, fewer manual tasks, more time for the work that actually needs a human. But between model testing, legal reviews, and the endless “what ifs” from well-meaning stakeholders, it can start to feel like you’re flying a plane while still building the wings.
At Mitratech, we’ve seen this tension up close—the space between ambition and adoption, between what AI could do for HR and what it should do. Importantly, AI doesn’t automatically make processes smarter or more ethical; people do. That’s why we build technology that keeps humans in the loop, and why our HR digital assistant, ARIES™, was designed not to replace expertise, but to amplify it.
Introducing Mitratech’s AI for HR
Empower your HR team to stay ahead of constantly evolving employment laws and regulatory requirements. Using conversational AI, ARIES™ provides quick, reliable answers and resources by drawing from an extensive database of federal and state laws, expert Q&As, and compliance materials created by the Mitratech Mineral content team and HR specialists.
With ARIES™, you can:
- Quickly access accurate compliance information tailored to your needs;
- Reduce administrative workloads by enabling employees to self-serve answers to HR-related questions; and
- Provide clear guidance to employees, streamlining processes like onboarding, leave management, and workplace policies.
Implementing AI in HR
In their SHRM 2025 session, Susan Anderson and Aimee Pedretti shared what it really takes to get there: five lessons learned through trial, error, and a lot of collaboration. These aren’t theories from a white paper; they’re lived experiences from teams who’ve built responsible, human-centered AI in the messy real world of HR.
Here’s what they learned, and what every HR leader should know before turning “AI in HR” from idea to implementation:
1. Start with the End in Mind
If you’ve ever been handed a “transformational” tool without a clear problem to solve, you know how fast excitement turns into noise. The same is true for AI. Before bringing in a new platform or feature, pause and ask the most basic question: what problem are we actually solving? When teams start with outcomes, like shortening time-to-hire or reducing administrative work by 20%, everyone knows what “good” looks like.
Research from McKinsey shows that HR AI pilots tied to measurable business outcomes are three times more likely to scale successfully than those launched without clear objectives. This clarity matters not just for alignment, but for trust. Bring in your partners (IT, legal, compliance, and risk) early, so AI doesn’t feel like an HR side project but a shared company initiative with the right guardrails from day one.
Speaking of guardrails—think of them as bumpers that keep innovation from veering off track. They don’t slow things down, they keep people safe. At Mitratech, for example, our ARIES™ digital assistant was intentionally designed not to weigh in on termination decisions or harassment complaints. Those moments deserve human judgment and empathy, not algorithmic shortcuts. Guardrails like that build credibility before the first output ever ships.
Start small, prove value, and scale with care. It’s not about getting the flashiest ROI in Q1; it’s about earning the kind of internal trust that sustains transformation.
Key Takeaways:
- Define your AI use case and success metrics upfront.
- Engage IT, legal, compliance, and risk teams early.
- Scope your tools intentionally; guardrails create trust.
- Start small to build momentum and internal trust.
2. Empower the Right Team
AI success in HR doesn’t start with technology; it starts with people. One of the most important lessons Aimee Pedretti shares from the field is this: HR needs to be embedded in AI projects from the beginning, not brought in at the end as reviewers.
In her own experience, being deeply involved with the technical team, learning their language, and understanding the tool’s underlying architecture helped her advocate for HR needs in ways that earned trust and shaped better outcomes. A product-innovation mindset may be new for HR professionals, and comfort with rapid iteration and feature prioritization will vary.
According to Lattice’s State of People Strategy 2026 Report, HR leaders who partner cross-functionally on AI adoption are 40% more confident in the fairness and quality of AI outcomes.
There’s another piece here that doesn’t get enough airtime: psychological safety. Building something new means messy drafts, half-baked ideas, and tension between what’s technically possible and what’s ethically sound. Aimee puts it best: the best teams aren’t the ones that never fail; they’re the ones that fail safely and keep learning.
Key Takeaways:
- HR must be embedded in AI projects from the start, not just reviewing outputs.
- Build a cross-functional team where HR and technical experts collaborate side-by-side.
- Encourage psychological safety to support learning, questions, and iteration.
- Equip HR leaders to speak the language of AI technology (e.g. product and engineering) to better advocate for people-centered solutions.
- Set clear expectations: early versions won’t be perfect, and that’s part of the process.
3. Audit and Quality Are Critical
If AI is only as good as the data it learns from, then trust is only as strong as the people who check its work. That’s where human-in-the-loop (HITL) review comes in: the practice of keeping humans actively involved in reviewing, refining, and improving AI systems, rather than letting the technology make decisions on its own.
From day one, Aimee’s team embedded domain experts alongside engineers at every stage: evaluating foundation models (the “LLM bake-off”), writing and refining system prompts, and ongoing testing and refinement. This deep partnership helped bridge different mindsets: where engineers may see AI output that is 95% accurate as a win, HR leaders know the missing 5% might carry real risk.
Quality assurance is an ongoing process. The team created a structured review process where human raters continuously assessed AI-generated outputs for accuracy, clarity, and bias. HITL ensures AI stays relevant, responsible, and accurate, long after the first launch.
Key Takeaways:
- Embed domain experts throughout the AI lifecycle, not just at the end.
- Create repeatable QA processes with calibrated human review.
- Measure outputs for accuracy, clarity, and bias frequently.
- Plan for how to identify, flag, and fix errors over time.
- Build for continuous improvement, not a one-and-done launch.
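The structured review process described above can be sketched in a few lines: human raters score each AI output on accuracy, clarity, and bias, scores are aggregated across raters, and anything below a threshold is flagged for rework. The rating dimensions, 1-5 scale, and threshold here are illustrative assumptions, not the team’s actual rubric.

```python
# Minimal sketch of a calibrated HITL review loop. Field names,
# the 1-5 scale, and the 4.0 threshold are hypothetical.

from dataclasses import dataclass
from statistics import mean

@dataclass
class Rating:
    rater: str
    accuracy: int   # 1-5: is the answer factually correct?
    clarity: int    # 1-5: is it understandable to an employee?
    bias_free: int  # 1-5: 5 means no bias observed

def review(output_id: str, ratings: list[Rating],
           threshold: float = 4.0) -> dict:
    """Aggregate rater scores and flag dimensions needing rework."""
    scores = {
        "accuracy": mean(r.accuracy for r in ratings),
        "clarity": mean(r.clarity for r in ratings),
        "bias_free": mean(r.bias_free for r in ratings),
    }
    flagged = [dim for dim, s in scores.items() if s < threshold]
    return {"output_id": output_id, "scores": scores, "flagged": flagged}
```

Running this over a sample of outputs every review cycle, rather than once at launch, is what makes the QA process continuous instead of one-and-done.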
4. Prepare People, Not Just Platforms
Every new technology sparks the same question in HR: “Is this going to replace jobs?” The truth is more nuanced. AI changes work before it replaces it, and in HR, it’s creating more work before it reduces any.
During early pilots, teams are writing prompts, testing responses, documenting edge cases, and explaining changes to stakeholders. Gartner’s 2025 HR Technology Trends report found that 72% of HR leaders say their teams’ workloads increased during AI implementation because of added training, oversight, and data cleanup.
Change management is a critical piece of integrating AI in HR, and it deserves more attention than we can cover here. The key insight? AI shifts roles rather than replaces them. Right now, during the building and testing phases, there’s actually more work to do, not less, and this creates job security for HR experts involved in development.
Over time, the work evolves, allowing HR professionals to focus on more human-centered tasks, while AI handles routine, mundane work. Fear of job loss or misuse is natural. Transparency is essential: clear, intentional communication from leadership helps ease fears and uncertainty.
It’s not just HR teams who need reassurance, managers and adjacent stakeholders also need to be informed and involved. Those who participate in the process and are kept in the loop tend to feel more confident and less threatened by AI’s impact.
Key Takeaways:
- AI shifts roles but doesn’t replace people; more work happens during development phases.
- Transparent communication from leadership builds trust and eases fear.
- Include managers and all stakeholders in the change conversation.
- Involve teams early to create a shared journey and reduce uncertainty.
- Emphasize the human-centered value of evolving HR roles.
5. Build Responsible AI by Design
If you’ve ever watched a “black box” tool spit out an answer and thought, “where did that come from?”, you already understand why explainability matters. Responsible AI means showing your work, tracing every output to a trusted source, making sure it’s grounded in expertise, and giving people the right to challenge it. When people understand the “why” behind AI outputs, they’re far more likely to use them.
Mitratech’s ARIES™ HR digital assistant is a practical example. It draws from vetted content written by HR and employment law experts, and every recommendation links directly to its source. This transparency gives users confidence to act on the advice, not second-guess it.
Key Takeaways:
- Create and use AI systems with transparent, trusted sources; avoid “black box” tools.
- Prioritize explainability to trace AI decisions, especially in sensitive HR processes.
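One way to picture “showing your work” is a data structure where every answer carries the sources it was grounded in, so users can verify the advice instead of taking it on faith. The classes, fields, and example URL below are hypothetical illustrations, not the ARIES™ API.

```python
# Illustrative sketch of a source-grounded answer: the citation travels
# with the text, so the "why" behind an output is always visible.

from dataclasses import dataclass, field

@dataclass
class Source:
    title: str
    url: str

@dataclass
class GroundedAnswer:
    text: str
    sources: list[Source] = field(default_factory=list)

    def render(self) -> str:
        """Format the answer with its citations appended."""
        if not self.sources:
            return self.text
        cites = "\n".join(f"- {s.title}: {s.url}" for s in self.sources)
        return f"{self.text}\n\nSources:\n{cites}"
```

The design choice worth noting: an answer without sources is still representable, which makes ungrounded outputs easy to detect and block before they reach a user.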
When you strip away the jargon and the pressure to “get AI right,” what’s left is the same thing HR has always been about—good judgment, clear communication, and a deep understanding of people. The tools are changing, but the heartbeat of the work hasn’t. Whether you’re testing an AI pilot, updating your SMART goals, or just trying to make the day a little smoother for your team, the real measure of progress is trust.
If 2025 was the year HR experimented with AI, 2026 will be the year we operationalize it, with more empathy, stronger guardrails, and a deeper understanding that technology is only as fair and effective as the people shaping it.
Start small, stay curious, and keep humans in the loop.
