If you’ve ever tried to bring AI into HR, you know the work starts long before the technology goes live. Mitratech's Susan Anderson and Aimee Pedretti have been there—testing models, managing change, and reminding everyone that “human-centered” isn’t a tagline, it’s a discipline. In their SHRM 2025 session, they shared five lessons from the messy, real-world side of responsible AI.
The promise is exciting: faster insights, fewer manual tasks, more time for the work that actually needs a human. But between model testing, legal reviews, and the endless “what ifs” from well-meaning stakeholders, it can start to feel like you’re steering a plane while still building the wings.
At Mitratech, we’ve seen this tension up close—the space between ambition and adoption, between what AI could do for HR and what it should do. Importantly, AI doesn’t automatically make processes smarter or more ethical; people do. That’s why we build technology that keeps humans in the loop, and why our HR digital assistant, ARIES™, was designed not to replace expertise, but to amplify it.
Introducing Mitratech’s AI for HR
Empower your HR team to stay ahead of constantly evolving employment laws and regulatory requirements. Using conversational AI, ARIES™ provides quick, reliable answers and resources by drawing from an extensive database of federal and state laws, expert Q&As, and compliance materials created by the Mitratech Mineral content team and HR specialists.
With ARIES™, you can:
- Quickly access accurate compliance information tailored to your needs;
- Reduce administrative workloads by enabling employees to self-serve answers to HR-related questions; and
- Provide clear guidance to employees, streamlining processes like onboarding, leave management, and workplace policies.
Implementing AI in HR
In their SHRM 2025 session, Susan Anderson and Aimee Pedretti shared what it really takes to get there: five lessons learned through trial, error, and a lot of collaboration. These aren’t theories from a white paper; they’re lived experiences from teams who’ve built responsible, human-centered AI in the messy real world of HR.
Here’s what they learned, and what every HR leader should know before turning “AI in HR” from idea to implementation:
1. Begin with the End in Mind
If you’ve ever been handed a “transformational” tool without a clear problem to solve, you know how fast excitement turns into noise. The same is true for AI. Before bringing in a new platform or feature, pause and ask the most basic question: what problem are we actually solving? When teams start with outcomes, like shortening time-to-hire or reducing administrative work by 20%, everyone knows what “good” looks like.
Research from McKinsey shows that HR AI pilots tied to measurable business outcomes are three times more likely to scale successfully than those launched without clear objectives. This clarity matters not just for alignment, but for trust. Bring in your partners (IT, legal, compliance, and risk) early, so AI doesn’t feel like an HR side project but a shared company initiative with the right guardrails from day one.
Speaking of guardrails—think of them as bumpers that keep innovation from veering off track. They don’t slow things down, they keep people safe. At Mitratech, for example, our ARIES™ digital assistant was intentionally designed not to weigh in on termination decisions or harassment complaints. Those moments deserve human judgment and empathy, not algorithmic shortcuts. Guardrails like that build credibility before the first output ever ships.
Start small, prove value, and scale with care. It’s not about getting the flashiest ROI in Q1, it’s about earning the kind of internal trust that sustains transformation.
Key takeaways:
- Define AI use cases and success metrics up front.
- Involve IT, legal, compliance, and risk teams early.
- Scope tools intentionally; guardrails create trust.
- Start small to build momentum and internal trust.
2. Empower the Right Team
AI success in HR doesn’t start with technology, it starts with people. One of the most important lessons Aimee Pedretti shares from the field is this: HR needs to be embedded in AI projects from the beginning, not brought in at the end as reviewers.
In her own experience, being deeply involved with the technical team, learning their language, and understanding the tool’s underlying architecture helped her advocate for HR needs in ways that earned trust and shaped better outcomes. A product innovation mindset may be new for HR professionals, and comfort with rapid iteration and product feature prioritization will vary.
According to Lattice’s State of People Strategy 2026 Report, HR leaders who partner cross-functionally on AI adoption are 40% more confident in the fairness and quality of AI outcomes.
There’s another piece here that doesn’t get enough airtime: psychological safety. Building something new means messy drafts, half-baked ideas, and tension between what’s technically possible and what’s ethically sound. Aimee puts it best: the best teams aren’t the ones that never fail; they’re the ones that fail safely and keep learning.
Key takeaways:
- HR must be embedded in AI projects from the start, not just reviewing outputs.
- Build a cross-functional team where HR experts and technical experts work side by side.
- Encourage psychological safety to support learning, questioning, and iteration.
- Help HR leaders learn the language of AI technology (e.g., product and engineering) so they can better advocate for human-centered solutions.
- Set clear expectations: early versions won’t be perfect, and that’s part of the process.
3. Audit and Quality Are Critical
If AI is only as good as the data it learns from, then trust is only as strong as the people who check its work. That’s where human-in-the-loop (HITL) comes in: the practice of keeping humans actively involved in reviewing, refining, and improving AI systems, rather than letting the technology make decisions on its own.
From day one, Aimee’s team embedded domain experts alongside technical engineers at every stage: evaluating foundation models (the “LLM bake-off”), writing and refining system prompts, and ongoing testing and refinement. This deep partnership helped bridge different mindsets: where engineers may see AI output that is 95% accurate as a win, HR leaders know the missing 5% might carry real risk.
Quality assurance is an ongoing process. The team created a structured review process where human raters continuously assessed AI-generated outputs for accuracy, clarity, and bias. HITL ensures AI stays relevant, responsible, and accurate, long after the first launch.
Key takeaways:
- Involve domain experts throughout the AI lifecycle, not just at the end.
- Create a repeatable quality assurance process with calibrated human review.
- Measure outputs for accuracy, clarity, and bias frequently.
- Plan how errors will be identified, flagged, and fixed over time.
- Build for continuous improvement, not a one-and-done launch.
4. Prepare People, Not Just Platforms
Every new technology sparks the same question in HR: “Is this going to replace jobs?” The truth is more nuanced. AI changes work before it replaces it, and in HR, it’s creating more work before it reduces any.
During early pilots, teams are writing prompts, testing responses, documenting edge cases, and explaining changes to stakeholders. Gartner’s 2025 HR Technology Trends report found that 72% of HR leaders say their teams’ workloads increased during AI implementation because of added training, oversight, and data cleanup.
Change management is a critical part of bringing AI into HR, and it deserves far more attention than we can cover here. The key insight? AI transforms roles rather than replacing them. Right now, in the build-and-test phase, there is actually more work to do, not less, which offers job security for the HR experts involved in development.
Over time, the work will evolve: HR professionals can focus on more human-centered tasks while AI handles the routine busywork. Fear of job loss or misuse is natural, and transparency is essential: clear, intentional communication from leadership helps ease fear and uncertainty.
It’s not just HR teams who need reassurance; managers and adjacent stakeholders also need to be informed and involved. People who participate in the process and stay up to date tend to feel more confident and less threatened by AI’s impact.
Key takeaways:
- AI transforms roles rather than eliminating them; more work happens during development.
- Transparent communication from leadership builds trust and eases fear.
- Involve managers and all stakeholders in the change conversation.
- Engage teams early to create a shared journey and reduce uncertainty.
- Emphasize the human-centered value of evolving HR roles.
5. Build Responsible AI by Design
If you’ve ever watched a “black box” tool spit out an answer and thought, “where did that come from?”, you already understand why explainability matters. Responsible AI means showing your work, tracing every output to a trusted source, making sure it’s grounded in expertise, and giving people the right to challenge it. When people understand the “why” behind AI outputs, they’re far more likely to use them.
Mitratech’s ARIES™ HR digital assistant is a practical example. It draws from vetted content written by HR and employment law experts, and every recommendation links directly to its source. This transparency gives users confidence to act on the advice, not second-guess it.
Key takeaways:
- Build and use AI systems with transparent, trusted sources; avoid “black box” tools.
- Prioritize explainability so AI decisions can be traced, especially in sensitive HR processes.
When you strip away the jargon and the pressure to “get AI right,” what’s left is the same thing HR has always been about—good judgment, clear communication, and a deep understanding of people. The tools are changing, but the heartbeat of the work hasn’t. Whether you’re testing an AI pilot, updating your SMART goals, or just trying to make the day a little smoother for your team, the real measure of progress is trust.
If 2025 was the year HR experimented with AI, 2026 will be the year we operationalize it, with more empathy, stronger guardrails, and a deeper understanding that technology is only as fair and effective as the people shaping it.
Start small, stay curious, and keep humans in the loop.
