If you’re wondering why your AI pilot project stalled, or why your team still prefers sticky notes to smart tools, you’re not alone.
The truth is, most AI initiatives fail not because the technology doesn’t work, but because the organization isn’t ready for it. No champion. No training. No controls. No clarity on what problem it’s supposed to solve.
I recently sat down with Pete Terryn, Director of Operations at NuWave Technology Partners and co-founder of the West Michigan AI Lab, to talk about exactly that: why operationalizing AI is harder than people think—and what it takes to get it right.
Pete’s not a machine learning engineer. He’s not building custom models from scratch. But he has helped operationalize AI within his organization, and he’s built a thriving community of AI practitioners in West Michigan. Our conversation was packed with practical insights about what holds companies back and how to overcome it.
Here’s what we learned.
If you want to operationalize AI, you need someone to own it. And no, they don’t need a PhD. Pete doesn’t have a technical background, and he’s not a coder. What he does have is curiosity, consistency, and the drive to understand how AI can solve problems.
That’s what makes him the perfect internal champion.
Whether it’s someone in leadership or a line-level employee who’s already experimenting with tools like ChatGPT, the key is to empower that person. Give them space, give them time, and give them the organizational mandate to test and teach.
As Pete put it:
“Your chances of implementing AI without a champion are pretty low.”
RELATED: How to navigate change (and why your team hates it so much)
At NuWave, Pete and his team didn’t start with big moonshot AI projects. They started small: drafting their own acceptable use policy with help from an LLM. From there, they layered AI into help desk operations, giving new technicians a “second brain” to diagnose problems faster and categorize tickets more intelligently.
That’s the point: AI isn’t replacing people. It’s augmenting them.
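To make that “second brain” idea concrete, here’s what LLM-assisted ticket triage can look like in practice. This is a minimal sketch assuming an OpenAI-style chat API; the category list, model name, and prompt are illustrative placeholders, not NuWave’s actual setup.

```python
# Minimal sketch of LLM-assisted help desk triage (illustrative only).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Placeholder categories -- a real help desk would use its own taxonomy.
CATEGORIES = ["hardware", "network", "software", "access/permissions", "other"]

def categorize_ticket(ticket_text: str) -> str:
    """Ask the model to file a ticket into exactly one known category."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the help desk ticket into exactly one of: "
                    + ", ".join(CATEGORIES)
                    + ". Reply with the category name only."
                ),
            },
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content.strip().lower()

print(categorize_ticket("Laptop won't connect to the office VPN after the update."))
```

The model suggests a category; the technician still owns the final call. That’s the augmentation Pete is describing.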
This mirrors some of our own experience at LaFleur. Years ago, we built a predictive model that analyzed legal intake forms to estimate case value. It worked, but it didn’t get traction. Why? No one believed it could work. Not really. The technology was there, but the buy-in wasn’t.
The lesson: AI has to solve a real problem, and people have to believe it’s solving it.
This was Pete’s biggest personal takeaway, and I couldn’t agree more: training can’t live in one person’s head.
Even technical teams can be resistant to AI, especially if they don’t feel confident using it. The organizations that win here are the ones that create broad, structured education: think lunch-and-learns, certification platforms, internal wikis, and good old-fashioned conversation.
In fact, Pete believes training is the key to AI adoption:
“I thought that just because I was learning it, it would get implemented. But you have to train the organization.”
RELATED: AI and law firms: Preparing for the future
If you’re using ChatGPT Pro and think your data isn’t being used to train the model, surprise: it probably is. You have to turn that feature off manually. And unless you’re on ChatGPT Team or using Azure-hosted OpenAI, your information might still be up for grabs.
That’s especially risky for law firms and regulated industries.
Pete recommends sticking with platforms that give you a clear “don’t train on my data” toggle. Microsoft Copilot and OpenAI both offer that level of control, and if you’re using a wrapper product (like Harvey), you need to get it in writing that your data isn’t training anyone’s model.
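If you do go the Azure route Pete mentions, the switch is mostly configuration. Here’s a minimal sketch using the OpenAI Python SDK’s Azure client; the endpoint, key, API version, and deployment name are placeholders you’d swap for your own. The draw is that Azure-hosted OpenAI deployments keep your prompts inside your tenant and don’t use them to train the underlying models.

```python
# Minimal sketch of pointing the OpenAI Python SDK at an Azure-hosted
# deployment. All values below are placeholders, not real credentials.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-AZURE-OPENAI-KEY",                          # placeholder
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # in Azure, this is your deployment, not a raw model name
    messages=[{"role": "user", "content": "Draft a one-paragraph summary of our acceptable use policy."}],
)
print(response.choices[0].message.content)
```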
This is the kind of conversation we’re having more and more with our legal clients. Because the last thing you want is for someone’s PII (personally identifiable information) to end up in a language model.
RELATED: Ethical marketing and AI: Navigating challenges in highly regulated industries
Pete’s go-to tools align closely with what we use internally at LaFleur.
Personally, I use Perplexity Pro with SharePoint integrations to search internal files. I also use Copilot to manage my calendar and find docs across Microsoft’s ecosystem. And yes, I’m a little paranoid about Google knowing too much, but the utility is hard to ignore.
We wrapped up the conversation reflecting on how easy it is to think you’ve mastered something after five minutes with AI. The reality is: AI is powerful, but it’s still a tool. And a tool is only as good as the person wielding it.
“You need to research your answers before you stand behind them,” Pete warned. “Just because ChatGPT says it, doesn’t mean it’s true.”
We’re all living through the AI revolution in real time. That means mistakes, overconfidence, and some healthy skepticism. But if you start with a champion, solve real problems, train your team, and stay smart about risk—you’ll be on your way.
And if you’re in West Michigan, check out the West Michigan AI Lab Meetup. You just might bump into Pete.
Let’s talk. LaFleur’s AI audits and consulting services help you evaluate your risk, build out your use cases, and train your team.