As someone who’s spent years at the intersection of HR and technology, I’m constantly energized by the way AI is reshaping how we work. And in a recent conversation with Daniel Strode—Principal at Coode, keynote speaker, and returning guest on the Transform Your Workplace podcast—we explored how HR professionals can strategically adopt AI tools without sacrificing what matters most: our human-centered approach.

Daniel didn’t mince words: “Recruitment and data analytics are the two biggest areas where I’m seeing AI make a real impact,” he told me. And it makes sense. These functions are data-intensive, time-consuming, and ripe for optimization.

But Daniel is not just talking about plugging in new software—he’s advocating for a fundamental shift in mindset and structure.

The 5P Framework for Responsible AI Adoption

To help organizations navigate the complexities of integrating AI, Daniel shared what he calls the 5P Framework:

  1. Process Enhancement – Using AI to streamline time-consuming tasks like CV screening, payroll, and internal queries.

  2. Personalization and Experience – Enhancing the employee journey with intelligent recommendations and custom insights.

  3. Predictive Insight and Data Analysis – Leveraging AI to anticipate employee turnover, identify engagement trends, and make smarter decisions.

  4. Policy and Governance – Establishing clear ethics, transparency, and compliance frameworks, especially critical in HR where sensitive data is the norm.

  5. People and Culture Transformation – Empowering teams to embrace digital tools while preserving trust, empathy, and the human connection.

Of the five, Daniel is clear: “Policy and governance and people and culture are the two most important. Without a digital mindset and a cultural muscle for change, nothing happens.”

Adoption Gaps: Power Users vs. the Rest

Daniel considers himself a “strategic power user” of AI. He uses tools like ChatGPT as creative partners for product development, ideation, and strategic planning. But most professionals aren’t there yet.

In fact, only about 25% of working-age adults use generative AI regularly. “They tried it once like it was Google, didn’t get great results, and never came back,” Daniel explained.

That’s not just an individual issue—it affects adoption company-wide. Many employees inherit AI without realizing it, as vendors quietly integrate machine learning into existing tools. “Most can’t turn it off even if they wanted to,” Daniel said. From recruitment platforms to employee engagement surveys, AI is already embedded.

This raises a critical need: cataloging your tech stack. Know what tools you’re using, what AI is baked into them, and where potential risks lie. “The vendor won’t take responsibility for how you use the tool—you will,” Daniel cautioned.

AI Strategy Requires Collaboration

When I asked who owns the responsibility for AI governance, Daniel didn’t hesitate: “It’s a trifecta—HR, compliance, and IT. It doesn’t work in isolation.”

IT may deploy the tools, but HR and compliance must drive the ethical strategy. From training employees to setting up governance committees, everyone has a seat at the table—and a responsibility to speak up.

Getting Hands-On: Upleveling the Team

For those eager to bring their teams along, Daniel advocates a hands-on, layered approach:

  • Start with experimentation – Encourage team members to “play” with AI. “The number one global use case right now?” Daniel asked. “Writing bedtime stories for your kids.”

  • Offer structured learning – Webinars, workshops, and zero-risk internal challenges help build AI literacy.

  • Use progressive prompting – Teach employees how to move from simple assistant-style prompts to using AI as a strategic advisor.

  • Assign identities to the model – “Always start a prompt by telling the AI who it is—‘Act as a recruiting expert’ or ‘Act as Brandon Laws.’ It frames the response much better,” Daniel said.

The People Transformation AI Demands

When it comes to people and culture, Daniel outlined three transformation layers:

  1. HR Function – Teams must evolve from transactional processors to strategic advisors, rethinking skills, operating models, and roles.

  2. Enterprise-Wide Literacy – Everyone needs a baseline understanding of AI. “We’re asking people to adopt a coworker that’s not human,” he noted.

  3. Cyborg-Minded Leadership – Leaders must model tech curiosity, encourage experimentation, and foster trust in AI.

“Build teams that are half-digital and half-human,” Daniel said. “Leaders need to talk about AI every day.”

Avoiding the Culture Pitfall

A moment in our conversation really stuck with me—when I told Daniel about an HR leader who asked if AI might make us less human. It’s a valid concern. Will all this automation disconnect us, like social media arguably has?

Daniel reframed the question beautifully: “Did electricity make us less human?” He acknowledged the tension but believes the outcome depends on intention.

“We could actually be more human,” he said, “if we spend less time on manual processes and more time storytelling, connecting, and understanding people one-on-one.”

Final Thoughts: A Brighter Vision

Daniel left me with this hopeful reminder:

“AI will change jobs. Some roles will shift slightly, others completely. But it’s not about AI taking jobs—it’s about helping people through the transformation with care.”

The future of HR isn’t just about smarter tools—it’s about braver leadership. If we want to shape organizations that are efficient and empathetic, we must lean into AI boldly, strategically, and, above all, humanely.

Brandon Laws is a workplace culture and leadership enthusiast, host of the Transform Your Workplace podcast, and VP of Marketing and Product at Xenium HR.