You’ve heard the buzzwords: agentic AI, generative models, governance frameworks, risk mitigation… But what does any of that actually look like when you’re in-house and trying to lead an organization through the AI shift, without setting off a compliance panic or creating friction with your product team?
That’s exactly what this session tackled.
In a webcast hosted by Corporate Counsel Business Journal (CCBJ) and Mitratech, a brilliant, cross-functional panel of women from legal, risk, and ops sat down to share what it really takes to build an AI program that’s strategic, responsible, and built to last.
You’ll hear voices like:
- Julie Honor, Counsel from Thompson Hine, with the wisdom of a former GC and the grit of someone who’s seen AI strategy go wrong without strong human oversight.
- Somya Kaushik, Associate General Counsel and legal leader, who’s helping reshape how legal teams go from being risk-averse to becoming strategic accelerators.
- Madeline Reigh, a compliance strategist who keeps reminding everyone that governance isn’t just policy — it’s proactive culture-building.
- Liz Lugones, a legal ops veteran with a legal background who likens AI transformation to getting lawyers to love Zoom (they eventually did).
And they weren’t here for vague platitudes. These women brought stories, honesty, and humor to the table — from GCs who’ve been asked to “just write a policy that says no AI,” to risk leaders juggling HIPAA concerns while enabling innovation.
The message?
AI isn’t just another software deployment. It’s a people-first change initiative that cuts across legal, compliance, risk, product, and leadership — and it must be approached as such.
If you’re looking for an AI program checklist, you’ll find one here. But if you’re looking for insight into how to lead a real AI transformation, rooted in people, built on structure, and fueled by strategy, then pull up a chair for this webinar replay.
Start with the People (Yes, Really)
Before they dove into frameworks or workflows, the panel made one thing clear: AI is not a tech problem — it’s a people problem. Or rather, it’s a people opportunity.
Julie Honor, a former GC turned Outside Counsel, put it best: “Even as AI evolves, the human element stays at the center. That’s where the real decisions — and risks — live.”
The group described a wheel-shaped framework for responsible AI adoption, and guess what sits at the center? People and operations. Everything else (policy, process, tools, governance) rotates around that.
Agentic AI may be the shiny new thing, but as Julie explained, it’s really just “AI with workflows.” What matters more is how you design and use it. And that starts by making sure your people feel informed, empowered, and supported.
Who Owns This? Building the Right Committee (and Culture)
As Liz Lugones pointed out, AI might feel new, but rolling it out isn’t all that different from other major tech shifts. Remember when lawyers didn’t want to use Zoom? Yeah. Now they’re the first ones to click the meeting link.
To make AI stick, you need a structure — and you need the right voices at the table.
Mitratech’s own journey began with a small AI committee, comprising just a few key players, mainly from the legal department. However, as the project evolved, they brought in product, security, marketing, IT, and ops. The result? A dynamic group that could actually make decisions and move things forward.
Somya Kaushik, Associate General Counsel at Mitratech, summed it up beautifully: “Legal’s role isn’t just to say yes or no. It’s to ask: how far and how fast can we go, responsibly?”
Executive sponsorship helps, but don’t wait for a memo from the C-suite. Start where you are. Build the relationships. Create the structure. Grow your AI program as you go.
Use Cases First, Tools Later
When people talk about AI, they often start with: “What tools should we buy?”
Wrong question.
The better one, according to Liz and Somya, is: “What experience are we trying to improve?”
Whether it’s making internal work more efficient or building external-facing products, you have to start by identifying clear, people-centered use cases. Then build guardrails around those, especially if sensitive data like PHI (protected health information) or confidential IP is involved.
At Mitratech, they created an internal use case submission process. It wasn’t about locking things down — it was about empowering teams with clear guidelines and smart pathways. Eventually, they layered in a more formal intake via TAP, and began reviewing tools and roadmaps on a quarterly basis.
One smart move? Classifying tools based on the data they touch. If a tool doesn’t handle sensitive or proprietary data, there’s no need for a bottleneck. If it does? The full review kicks in.
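For the technically curious, here’s a minimal sketch of what that kind of data-based triage could look like if you codified it. It’s purely illustrative: the data classes, review tracks, and function names are assumptions, not Mitratech’s actual process.

```python
# Illustrative sketch only: routing AI tool requests by data sensitivity,
# loosely modeled on the tiered review the panel described.
# Category names and review tracks are assumptions, not a real policy.
from enum import Enum

class DataClass(Enum):
    PUBLIC = 1        # e.g., marketing copy, public docs
    INTERNAL = 2      # non-sensitive business data
    CONFIDENTIAL = 3  # proprietary IP, customer data
    REGULATED = 4     # PHI, PII under HIPAA/GDPR

def review_path(data_class: DataClass) -> str:
    """Route a proposed AI use case to the right review track."""
    if data_class in (DataClass.PUBLIC, DataClass.INTERNAL):
        return "fast-track: self-serve guidelines, no committee review"
    return "full review: legal, security, and risk sign-off required"

print(review_path(DataClass.INTERNAL))   # fast-track
print(review_path(DataClass.REGULATED))  # full review
```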
Yes, You Can Be Responsible and Strategic
AI and risk are often positioned as enemies. But the truth is, compliance and innovation can be great partners (if you design your program that way!).
Madeline Reigh, Director of Risk and Compliance at Mitratech, reminded us that managing AI risk isn’t a one-and-done. It’s a living system: policies, training, evaluations, and feedback loops that evolve in tandem with the technology and regulations.
That’s especially true in highly regulated industries, such as healthcare. One audience member raised a great point about HIPAA, and the panel responded with practical advice:
- Build closed, controlled models when needed.
- Add tailored HIPAA training for relevant teams.
- Layer privacy tech (like format-recognition filters) to catch risky data before it slips through. (A quick sketch of the idea follows this list.)
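To make the “format-recognition filter” idea concrete, here’s a minimal sketch of a pattern-based screen that flags risky-looking identifiers before text reaches an AI tool. The patterns (a US SSN format and a simple medical record number format) are illustrative assumptions; a real deployment would rely on a vetted DLP or privacy tool, not a homegrown regex.

```python
# Illustrative sketch only: a format-recognition filter that scans text
# for patterns resembling regulated identifiers before it is sent to an
# AI tool. The patterns below are assumptions for demonstration.
import re

RISKY_PATTERNS = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def flag_risky_input(text: str) -> list[str]:
    """Return labels for any risky-looking patterns found in the text."""
    return [label for label, pattern in RISKY_PATTERNS.items()
            if pattern.search(text)]

hits = flag_risky_input("Patient MRN: 00123456 was discharged Friday.")
if hits:
    print(f"Blocked: input may contain {', '.join(hits)}")
```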
And above all? Teach people the “why.”
Julie nailed it: “Don’t just tell people not to upload PHI. Make them understand what happens if they do — lost trust, fines, headlines. That’s when behavior changes.”
Training That Actually Works
The panel also stressed that training doesn’t mean a boring PowerPoint. It means:
- Embedding education in the tools themselves (smart prompts, input blockers).
- Creating just-in-time guidance.
- Customizing messages to resonate — what works for engineering won’t work for sales.
And perhaps most importantly, training is a culture play. People need to feel safe asking questions. They need to feel part of the process.
Which leads to one of the most viral moments of the session (had this been on X):
“People support what they create, and people need to see themselves in the change.” – Julie Honor
Pulling It All Together: The Framework in Action
By the end of the session, the audience had a clear picture of the framework’s full lifecycle:
1. Start with the use case. (What problem are you solving?)
2. Choose to build or buy. (What fits your needs and risk appetite?)
3. Implement with care. (Include people, plan for change.)
4. Govern it continuously. (Train, adapt, evaluate.)
As one audience question showed, all four stages are connected. Whether you’re worried about model pollution, misuse of data, or regulatory constraints, this AI program framework gives you a way to engage, not just react.

Final Takeaways from the Panel
Each speaker left us with a parting thought, tied to their perspective:
- Madeline (Risk): Keep policies dynamic. Risk isn’t static, and your strategy shouldn’t be either.
- Somya (Legal): Know your people, your appetite, your blind spots. Lead the dialogue, not just the documentation.
- Julie (Operations/Process): Help people feel included. You’re not replacing them — you’re empowering them.
- Liz (People): Legal ops is empathy, structure, and systems. Keep people at the center, and success will follow.
Last Reminder: Don’t Let “Perfect” Be the Enemy of Progress
There’s no one-size-fits-all playbook for AI. And that’s okay.
This conversation wasn’t about locking down a rigid system — it was about creating a living, breathing approach to AI that evolves with your people, your business, and the world around you.
So start where you are. Get organized. Get curious. And bring your people with you.
And if you’re still asking whether legal should lead the AI conversation?
The answer’s simple: you already are. Now lean in.
Want the full framework slide and follow-up materials? Watch the replay here.
