Strong Leaders, Strong Tech: Equipping Leaders to Grow in an AI World : Opening Keynote - From Shadow AI to Shared Superpowers

Original Event Date: November 19, 2025
5 minute read

In a forward-looking, energizing opening keynote, Susan Anderson and George Boone explored what it means to lead in a world where AI is both accelerating expectations and reshaping the core of leadership. Rather than framing AI as a threat, they positioned it as a shared superpower — a tool that strengthens human judgment, deepens connection, and elevates leaders’ strategic impact. Drawing from Mitratech’s real-world experience with AI-enabled workflows, culture transformation, and leadership capability building, Susan and George illuminated how organizations can shift from “shadow AI” usage to intentional, governed, psychologically safe adoption. Their session blended mindset shifts, practical people-leadership tools, and actionable strategies for developing strong leaders who thrive in a tech-driven future.

Session Recap

Susan Anderson opened the keynote by grounding the room in the reality that AI is already here — not as a distant future tool but as something leaders are using today, often without guidance. She emphasized that the rise of “shadow AI” (unapproved or informal use of tools) reflects not defiance but need: employees are turning to AI because they’re overwhelmed, under-resourced, and searching for efficiency. Rather than policing behavior first, Susan argued, organizations must start by empowering leaders with clarity, education, and shared principles.

She highlighted a critical tension: while AI can accelerate operational excellence, it also raises real concerns around compliance, accuracy, and risk. For HR and compliance teams, the path forward isn’t to block adoption — it’s to create a governance framework that enables responsible innovation. Susan spoke about Mitratech’s approach: cross-functional alignment, clear documentation, transparent policies, and ensuring employees understand not only what is permitted but why.

George Boone picked up the conversation by focusing on leadership capability in an AI-powered world. He stressed that AI cannot compensate for poor leadership — if anything, it amplifies it. In the AI era, leaders must excel in empathy, clarity, relationship-building, and the ability to create psychologically safe environments where employees feel confident experimenting with new tools. George walked through how Mitratech is redesigning leadership development to incorporate AI literacy, strategic thinking, and adaptive capacity.

Both speakers emphasized that technology alone does not create transformation — leaders do. Susan noted that leaders must model curiosity, not fear. George added that the organizations that win will be those whose leaders use AI to free capacity for what matters most: coaching, decision-making, and human connection.

The keynote closed with a call to action: instead of hiding AI use in the shadows, companies must cultivate shared superpowers — a culture where leaders are informed, confident, and equipped to guide their teams through change with transparency and trust.

Key Takeaways

• Shadow AI is a signal, not a failure — employees turn to AI because demands exceed capacity.
• Leaders must be equipped first; they set the tone for safe, responsible adoption.
• AI doesn’t replace leadership — it amplifies strengths and exposes gaps.
• Governance is essential: clear policies, clear rationale, and shared language across functions.
• Psychological safety fuels innovation; employees need permission to learn, try, and iterate.
• Leadership development must now include AI literacy and strategic thinking.
• Use AI to remove administrative drag so leaders can focus on people and decisions.
• Communication is the new superpower — leaders must explain the “why” behind AI usage.
• Trust grows when organizations are transparent about risks, boundaries, and expectations.
• Strong leaders plus strong tech equals resilient, future-ready organizations.

Final Thoughts

The keynote underscored a central truth: AI isn’t redefining leadership — it’s revealing its importance. Organizations that thrive in the coming decade will be those that treat AI not as a shortcut, but as a catalyst for better leadership, better communication, and stronger human-centered cultures. By shifting from shadow usage to shared superpowers, companies can unlock AI’s full value while elevating the capabilities, confidence, and wellbeing of their people.

Program FAQs

1. What is “shadow AI,” and why does it matter?
Shadow AI refers to employees using AI tools without approval or governance. It matters because it signals unmet needs and poses potential compliance or data risks.

2. How can leaders reduce shadow AI without restricting innovation?
By providing clear guidelines, approved tools, and psychologically safe spaces to learn and experiment.

3. What leadership skills become more important in an AI world?
Empathy, communication, curiosity, strategic thinking, and the ability to create trust.

4. How should organizations build AI governance?
Through cross-functional collaboration between HR, IT, compliance, and legal, paired with transparent policies and training.

5. Can AI replace parts of leadership work?
AI can replace administrative tasks, but not direction-setting, coaching, or human judgment.

6. How do we ensure AI aligns with organizational values?
By designing governance frameworks that reinforce ethics, safety, fairness, and accountability.

7. What role should HR play in AI adoption?
HR should guide policy creation, leader training, cultural readiness, and employee communication.

8. How can leaders help employees feel safe using AI?
Normalize experimentation, set clear expectations, and share examples of appropriate use.

9. What’s the biggest risk if organizations ignore AI?
Employees will continue using ungoverned tools, increasing compliance risks and inconsistency.

10. What’s the first step for leaders looking to integrate AI?
Start with curiosity: learn an approved tool, use it for a simple workflow, and discuss the experience openly with your team.

Full Program Transcript

Our first speakers are here. We have Susan Anderson, head of HR and compliance expert, and George Boone, director, organizational effectiveness and talent management, with the title here, which I love: From Shadow AI to Shared Superpowers. So, in the chat, hit the emojis and give these two amazing leaders a warm welcome for volunteering their time and their expertise to guide us on this topic and share their perspectives on it. George and Susan, thank you so much for being here with us. We'll hand it over to you two.

Thank you, everybody, for joining us today. My name's George Boone. I'm the director of organizational effectiveness and talent management here at Mitratech. Hello from way-too-hot Austin, Texas, right now. Today's session is essentially a sequel to webinars on AI and employee experience that my co-host, Susan Anderson, has hosted over the last few months. Susan's done incredible work over the past several months exploring how AI is reshaping the employee experience, from recruiting to retention, and what it means for trust, engagement, and culture, things I think we can all agree are crucial parts of our working environments. We've also looked at research on where AI pilots are happening around time savings and administrative work. Now, the conversation needs to go further than pilots and productivity, to what it really takes to scale AI in a way that builds confidence, connection, and measurable results. Susan is a seasoned executive with over two decades of experience leading business strategy, operations, and digital transformation across Fortune 100 and private equity-backed SaaS companies. An early adopter and advocate for emerging technologies, Susan has been at the forefront of the AI movement, guiding enterprise experimentation, shaping responsible adoption strategies, and reimagining how technology can drive operational excellence and customer value. Her background spans strategy, technology, enablement, and growth execution, with a focus on translating vision into action and preparing organizations for what's next.

Thanks, George. Really appreciate that intro. You left us with "What's next?" and that's exactly what we're here to discuss today. At this point, most leaders have already had that starting conversation: "Should we be using AI?" And right on its heels comes the trickier one: "Which AI should we be using, and how?" The reality is, no single AI model can meet every business need. Different teams need different strengths. Our customer support team might need a fast, affordable model that can handle huge volumes of text. Engineers likely want something fluent in code. Compliance leaders want airtight control of data. That's why forward-thinking organizations aren't picking just one model. They're orchestrating several. I feel strongly that AI isn't here to replace leaders. We need the human in human resources, but leaders who understand AI and how to use it responsibly to enhance their work will outpace the rest. At Mitratech, that's exactly the perspective we're helping organizations build: AI that's safe, orchestrated, and aligned to governance. AI as a strategic capability, not a side experiment, which is why we're talking about how to build an AI-enabled people function. You're probably hearing two competing narratives right now.
On the one side, the board and the CFO are asking, "Where is AI going to deliver leverage, accelerate our efficiencies, and reduce costs?" And on the other hand, your team is probably still wrestling with spreadsheets and four different systems that don't talk to each other. Somewhere in the middle sits the real work: figuring out where AI can remove friction without creating new risks, where it can make hiring and performance feel more fair, and how you should show progress next quarter.

So, let's get into it. I've been thinking a lot about AI use and the concept of trust, and I'm talking a lot about it lately. Two years ago, there was a real lack of trust in these AI tools, and rightfully so. The quality was spotty. Hallucinations were super common. And prompting skills needed to be honed to a tight point to get the right kinds of output. Fast-forward to today, and we're seeing stronger outputs from common and accessible tools. In some organizations, leadership remains cautious, and in others the mandate to use AI comes with very little guidance. The data we're showing here is from a KPMG report that came out just a few months ago. It highlights a reticence to share how employees are using AI, the shadow AI or bring-your-own-AI-to-work concepts. I suspect this is really driven by a fear of being asked to do more, or a fear of seeing jobs eliminated. So it's natural, this human instinct to protect some of this tool use. Additionally, in the second data point, we're seeing that formal training in the tools and the skill sets is lagging behind, with less than half of employees surveyed sharing that they've had any sort of formal AI training. So ask yourself: are any of these statistics a surprise to you? I think they tell us two things. First, AI is already woven into daily work. This isn't futuristic. It's now. We saw in the chat that some teams are just getting their toes dipped in. Others are full bore. We're going to hear from panelists later today sharing the whole range of experiences, and you'll hear from them just what's working and what's not. But secondly, and more importantly, I think these stats reveal something deeper: people don't feel safe being honest about it. They're not encouraged to be open about the ways that they're innovating and learning. Your employees may be experimenting in secret because they don't know how you're going to react.

We'll use the phrase shadow AI throughout our session today to describe the use of AI tools and applications by employees without the knowledge or formal approval of an organization's IT or security departments. And to quote a recent EY report that literally came out this week, stifling this kills innovation, but ignoring shadow AI creates security, governance, and compliance nightmares. The secrecy, and the tension between innovation, workflow, and fear: that's what I call the trust gap. Technology scales process and trust scales productivity, but what does that even mean? Innovation is sprinting forward. There are new AI tools, new expectations, new skills, but employee confidence, understanding, and emotional comfort isn't keeping up. There's a wide variance in comfort with AI use. The trust gap is not a technology problem; it's a human problem.
If employees don't feel informed, safe, and empowered, they will avoid opportunities for AI use. We spent a decade optimizing HR systems for efficiency, but trust is not always a given. Our role as HR leaders is to help close the gap by giving employees permission and structure to use AI responsibly. As Patrick Lencioni put it, without trust, conflict, the most essential element of innovation, becomes impossible. If people don't trust the process or one another, they won't challenge ideas, share concerns, or take the risk that innovation requires.

That's exactly right, George. Studies show that technology adoption and ROI rise with perceived fairness, and employees trust their peers more than their HR systems. This year's hr.com Future of Recruitment Technologies report uncovers the paradox that while most organizations have embraced AI-powered recruitment tools, fewer than half of the respondents believe that these systems deliver any real value. Only 43% of those surveyed rate their technology stacks as good or excellent. This really exposes a persistent gap between adoption and meaningful impact.

Okay, so we have cited a handful of recent studies. Let's ground this conversation in what top voices in our field are saying. You've probably heard of Gartner, so I'll start there. They remind us that it's not about having the most AI; it's about how we integrate it into the business and keep human oversight at the core. Dave Ulrich, one of the original architects of modern HR, has been super clear: AI doesn't succeed on its own. Rather, it succeeds when it amplifies authentic human connection. And then finally, these last two. Ellyn Shook, Accenture's former CHRO, reframes the role of HR entirely when it comes to AI: being the voice of the employee amidst AI transformation discussions and programs. And Korn Ferry's research backs this up. The best HR leaders aren't waiting for permission. They're moving fast, learning faster, and using experimentation as a leadership skill.

I know we're talking about building an AI-powered people function, but I don't see AI as the key here. AI is an enhancement; it's the whipped cream on the ice cream sundae, not the ice cream base. AI tools aren't one-size-fits-all, and it's important to find the right fit for your function. Viewing AI as an enhancement rather than a replacement is easier to embrace when it's grounded in your company's values. When teams start from what they believe in and how they show up for their people, it becomes clearer how AI can support those principles rather than competing against them. Let's take a look at how your values can guide where and how AI shows up across your people strategy. Your values shouldn't just live on a wall. They should be ingrained in your culture. If I could highlight one thing for you to walk away with today, it would be this: every organization that wants to scale AI responsibly needs to connect AI to its values. This is how you move from a collection of tools to a reflection of who you are, how you make decisions, how you treat people, and how you build trust. Connect to the employee experience. You have the chance to make your values even more apparent in daily work. Our values at Mitratech are trust, ownership, transparency, growth, and inclusivity. Susan mentioned shadow AI, those unsanctioned tools that pop up when people are eager to innovate but unclear on the guardrails. It's a challenge.
Culture-minded leaders approach these AI best-practice projects as a transparency project, bringing those experiments into the open so teams can safely explore, align, and innovate in ways that stay true to your values. Trust has to be a two-way relationship. How are you empowering your team? Are you providing clear examples of what tools are acceptable? How are you showing your teams ways of weaving AI into their daily work? Back in July, we hosted an AI-centric all-hands that showcased LinkedIn Learning's AI-focused learning catalog, which is now available to all employees at Mitratech. And then we also kicked off our AI momentum project bingo challenge, which highlighted use cases and encouraged our employees to try out new prompts on their own. Think of AI through a culture lens, as a values-to-action blueprint, a framework that ensures every AI decision reinforces who you are, not just what you do.

Yeah, thanks, George. So we're going to launch a poll. You'll see it shortly, and I'd ask you to rate which of these five aspects you feel most confident or advanced in within your own organization. I'll dive deep into each of these in a minute, but hopefully they're relatively clear as you go ahead and put your poll vote in. And I'd offer that our role as HR leaders is to create the conditions for fostering responsible use of AI, and that we have these five levers we can pull to build value across our organizations. If you've already addressed four or five of these, it's fair to say your organization is fairly mature, and that's a strong position to be in. But for most teams, what we're seeing is that some layers are well developed, like maybe tools and metrics, while others, especially governance and capability, are still emerging. So let's take a deeper look inside this model. I'm going to go through each of these one at a time and talk a little bit about what they mean and how they show up. And these are in order by intention, so we're going to start with vision and governance. I start here because this is about setting a tone and an aspirational context for how AI shows up at your organization. This vision and how it comes to light leans into how we build trust, translate our ethics, and reinforce responsible use of these tools. Many companies, including our own in our early days of experimentation, started with tools and maybe some training. But not all have laid the groundwork to ground their AI work in an aspirational vision. When it comes to leveraging AI within HR, this is where you should start. Define the why behind using AI within your function. The other half of this is governance, and that sets the principles, the policies, the guardrails that keep innovation aligned with your organization's values. When governance is well-defined and well-understood, it doesn't slow progress; it actually speeds it up, because people know what types of innovation they can experiment with. Clear direction reduces random experimentation and unstructured attempts, allowing teams to innovate confidently within known boundaries. And by creating these trusted sandboxes, organizations enable responsible experimentation and faster scaling of what works. The key headline here for you: strong governance also helps address concerns about bias, fairness, and ethics, something we know many of us care deeply about.
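To make the guardrails idea concrete, here is a minimal sketch of how an approved-use policy might be expressed and checked in code. The tool names, data classes, and rules are hypothetical illustrations, not Mitratech's actual policy.

```python
# Hypothetical guardrail policy: which AI tools may touch which data classes.
# Every name and classification below is illustrative only.
APPROVED_USES = {
    "general_llm_chat": {"public", "internal"},          # no customer or HR data
    "recruiting_screener": {"candidate_provided"},       # bias-audited, HR-approved
    "code_assistant": {"public", "internal", "source"},  # engineering use only
}

def is_use_approved(tool: str, data_class: str) -> bool:
    """Allow only if the tool is registered AND cleared for this data class."""
    return data_class in APPROVED_USES.get(tool, set())

# An unregistered tool (shadow AI) is denied by default:
assert is_use_approved("recruiting_screener", "candidate_provided")
assert not is_use_approved("random_browser_plugin", "internal")  # not on the list
```

The deny-by-default lookup is the point of the sketch: an unregistered tool fails the check automatically, while the explicit allow-list doubles as the "clear examples of acceptable tools" that employees are asking for.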
I'll move to the second one, process design, which is really about deciding where AI belongs and how it supports your people and your processes. You're looking at which steps automation should accelerate and which require deep human judgment. This layer often sits at the intersection of regulatory requirements and what really works best for your teams. So for example, you may choose to automate candidate screening to improve speed and consistency, but decide to keep final hiring decisions fully human to protect fairness and stay compliant. And before I move on to capability and skills, I want to offer up what might be somewhat provocative: I think incremental process improvements are really tempting, but may be a flawed approach. AI enables business leaders to entirely rethink the value chain of work and how to leverage these tools to reimagine the way work gets done. Think of it as a blank-page approach. That's where most companies are seeing transformational instead of incremental value. Instead of asking themselves, "Where can we plug in AI?" they're asking, "If we started from zero, how would this work look with AI as a teammate?"

The third one here is capability and skills. George mentioned earlier that technology only scales if people know how to use it. So this layer is about building AI fluency within your HR teams, helping recruiters, HR business partners, and people analytics teams understand how to question, interpret, and constantly improve AI outcomes. This is also about mindset, moving from being consumers of tools to being co-designers of solutions. It's an important shift. And this fluency builds confidence. Confidence builds trust, not just in the systems, but in ourselves as HR professionals leading this change.

Okay, number four, tools and agents. Now, finally, we're getting to the technology itself. Notice this isn't where we're asking you to start. It has to come after you've set that vision and the guardrails, after you've looked at your processes and you've built the right skills. This is where you choose the tools that integrate cleanly into your existing systems, that can explain their decisions, and that align with your governance standards. Getting this sequence right prevents pilot sprawl, or dozens of disconnected AI experiments that never scale or talk to one another. Even with the right tools, human oversight remains absolutely essential. Every model, workflow, and agent needs to be monitored, tuned, and recalibrated, because this technology is not staying still. It continues to evolve. Human-in-the-loop practices ensure AI doesn't just perform; it learns responsibly. Importantly, they turn oversight into a feedback loop for improvement, keeping the innovation safe, transparent, and aligned with your team's intent.

And finally, last but certainly not least, is what you're measuring: metrics and accountability. What's the ROI of your AI investments? How often are you conducting bias audits? How transparent are you with the results? These are board-level metrics that tell a complete story. I'll speak more about KPIs on the next slide, but together, these five components make up the value creation model for responsible AI. Not a theory, but a framework you can actually use. So let's take it into reality. Rather than just talk about the abstract, let's make it practical.
Before I dive into metrics and ROI, I want to show you what this looks like in action, how these different facets of the value creation model can come to life in real HR processes. I'm going to go with something really common we can all relate to, most likely: the recruiting ecosystem. We know many of you are starting there, and there are some obvious gains that AI can provide.

So that first layer, vision and governance, is setting the rules of the game. That was the number one thing I said we should start with. Starting with a strong vision and governance approach, defining the outcomes you're aspiring to achieve and the right set of guardrails to help you innovate responsibly, is the starting point. This should be done before you screen a single resume or implement any technology. And in our example here, the AI value creation model should begin with defining the overall business outcomes you're trying to drive and a vision for AI in this overall system. You might choose to unlock scale by leveraging AI tools to screen candidates or automate aspects of the scheduling process. Those are pretty likely places to start. For the governance aspects, you need to plan on defining how AI is used, what data it has access to, and how you're protecting fairness and privacy. You might even go further, saying every AI system in this process is documented in an AI registry and every recruiter knows the boundaries. I mentioned an AI registry; that's a fairly advanced topic, and it's emerging as a way to reduce shadow AI agents, which are unauthorized tools. Registries can build strength and trust in the AI operating model because they show a central inventory, ownership, and an audit trail of which agents are in use. The governance layer means no shadow AI, agent or otherwise, that we talked about earlier. So no surprises. Let me give you a couple of examples. The recruiter dashboard displays a tag showing that the sourcing AI has been bias-audited and approved by your AI review board. Maybe you have something similar. And the system prompts something like, "Automated screening active; human review required before candidate contact." So there are checks and balances within those systems.

The second thing we talked about on the prior slide was process design. This determines where AI helps and where humans decide, where AI is enhancing the work, and where humans continue to retain ownership. From requisition to shortlist, AI can support recruiters by matching skills, screening resumes, and identifying talent. But humans have to remain in control, reviewing, validating, and confirming each recommendation before it moves forward. This human-in-the-loop layer ensures efficiency without losing humanity.

Okay, there are three more. Capability and skills: we've said this a few times, technology is only going to help when people trust it. So build the skill sets, capabilities, and competencies around using these tools for your recruiting managers and your hiring managers, making sure that they're trained in AI and understand the underlying technology and the data that powers these models, and that they're able to spot bias and interpret the explainability reports. In practice, recruiters might complete microlearning modules on bias detection, and hiring managers might receive short explainability briefings or AI reports before each hiring cycle.
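As a rough sketch of the AI registry and human-in-the-loop gate described above (Python 3.10+; the field names, statuses, and values are assumptions for illustration, not a real schema or product), each agent could be recorded with an owner, an audit date, and a flag that blocks candidate contact until a named human signs off:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegisteredAgent:
    """One entry in a central AI registry: inventory, ownership, audit trail."""
    name: str
    owner: str                       # an accountable human, not a team alias
    approved_data: set[str]          # data classes this agent may touch
    bias_audited_on: date | None = None
    requires_human_review: bool = True
    audit_log: list[str] = field(default_factory=list)

def release_candidate_contact(agent: RegisteredAgent, reviewer: str | None) -> bool:
    """The AI may recommend, but a named human must sign off before outreach."""
    if agent.requires_human_review and reviewer is None:
        agent.audit_log.append(f"{agent.name}: contact blocked, awaiting human review")
        return False
    agent.audit_log.append(f"{agent.name}: contact released by {reviewer or 'policy'}")
    return True

sourcing_ai = RegisteredAgent(
    name="sourcing-ai",
    owner="talent-acquisition-lead",
    approved_data={"candidate_provided"},
    bias_audited_on=date(2025, 9, 30),
)
assert not release_candidate_contact(sourcing_ai, reviewer=None)        # blocked
assert release_candidate_contact(sourcing_ai, reviewer="recruiter_jd")  # released
```

The append-only audit log is what makes the registry defensible later: every blocked or released contact leaves a trail that a review board can inspect.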
Thanks, Susan. For tools and agents, the technology backbone, I wanted to talk about a few things we're doing here at Mitratech. I mentioned LinkedIn Learning's AI-focused learning catalog earlier, but perhaps LinkedIn's greatest tool in the AI space is the newly launched recruiting assistant. It helps take the mundane out of your recruiters' jobs. It can help you pass off scheduling, automate follow-up correspondence, or even make screening hundreds or thousands of candidates easier. Microsoft and LinkedIn have partnered to make your recruiters' lives easier and help them spend more time focusing on hiring the right candidates, not managing an inbox or a calendar. Maybe you want to hire the next Susan Anderson; you can even ask it to help you do that. Some great tools to have in the toolkit right now. AI tools can make onboarding easier by helping you facilitate performance management and employee learning as well. We won't touch on this in today's presentation, but I'd be remiss not to mention that Mitratech has a full suite of onboarding, performance management, and learning management solutions.

Thanks, George. Let's talk about metrics and accountability, our final, fifth step in the value creation model. Every hiring cycle produces data that can be measured and can be defended. So for example, you might be looking at bias audit receipts that can roll up into your quarterly reports, explainability logs that can feed into your compliance dashboards, and then traditional metrics like time to fill and candidate experience metrics. Those can show the productivity side of the equation. So when your board asks, "How do you know this is fair?" you have the receipts, literally. This is also where it helps to use trusted vendors. Mitratech's built-in guardrails help keep your team protected at every step. So some examples here: I mentioned time to fill; you might see a reduction of 28%, for example. Your offer acceptance rate might increase due to faster communication. Your quarterly fairness audit might look at minimizing variance by gender and ethnicity. And you'd have access to all explainability reports, archived for future reference. So this is just a way to take it from theory to practice.

On the next slide, I want to talk about how to take your next step forward. We walked through the recruiting ecosystem. It's a perfect example of what responsible AI could look like when it's working end-to-end, where you have clear governance, your human-in-the-loop checkpoints, and explainability built right into the process. But that said, very few HR teams are there yet, and that's totally okay. Most organizations are still somewhere in the middle, testing tools, running small pilots, trying to figure out how to connect it all without breaking what's already working. And rather than show you just a finished state and say, "Good luck!" I wanted to make this practical. So here's a sample 18-month roadmap, call it a responsible acceleration plan, that any HR function can start using to move from experimentation to scaled, measurable success. Now, remember that blank-page approach I mentioned earlier, where most companies are seeing transformational value? Instead of asking, "Where can we plug in AI?" ask yourself, "If we started from zero, how would this work look with AI as a team member?"
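To show how KPIs like the ones Susan walks through above might be computed, here is a small illustrative sketch. The numbers are made up to mirror her examples (a roughly 28% time-to-fill reduction, a fairness check on selection-rate variance) and are not real results; the record layout is an assumption, not a product schema.

```python
# Hypothetical hiring data; field names and values are illustrative only.
days_to_fill_before = [34, 41, 38, 45]   # days to fill, pre-AI quarter
days_to_fill_after  = [26, 29, 31, 27]   # days to fill, post-AI quarter

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

reduction = 1 - mean(days_to_fill_after) / mean(days_to_fill_before)
print(f"Time-to-fill reduction: {reduction:.0%}")   # -> 28% with these numbers

# Quarterly fairness check: selection rate per group, flag large variance.
selections = {"group_a": (18, 60), "group_b": (15, 55)}  # (selected, applicants)
rates = {group: sel / n for group, (sel, n) in selections.items()}
variance = max(rates.values()) - min(rates.values())
print(f"Selection-rate variance: {variance:.1%}")   # review if above your threshold
```

The point of a sketch like this is the receipts: the same few lines that produce the board-level headline number also produce the fairness figure you archive alongside it.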
As you think about what's next, I encourage you to start paying attention to AI agents. They're all over the press. At HR Tech and SHRM this year, they were all anybody could talk about. These are starting to mature, so as you come through your maturity curve, you're going to hear a lot more about them in the months ahead. AI agents are essentially AI teammates. They're autonomous systems that perform tasks based on rules you define. They learn from their outcomes, and they collaborate across workflows. And this is really the next evolution beyond tools and prompts in how work is going to get done in organizations, moving us from using AI to working alongside it.

From fear to fluency really captures what this AI transition feels like for most HR teams. When AI first pops up, the reaction is usually a mix of curiosity with a little bit of caution. At Mitratech, we saw that too. Smart, capable people wondering if it was going to replace them or what would happen if they made a mistake. Uncertainty is a natural feeling. We didn't try to push it out the door or sweep it under the rug. We worked with it. We created space for learning and an opportunity for honest conversations about what AI could help us do. Over the past year, our people team has built AI literacy through practice. We started by mapping where automation could remove stress and then focused on capability. We tapped into learning modules on how AI could enhance our working environment and sought to understand the best way to utilize it. Today, our team isn't afraid of AI. They're curious about it. They see it as a partner. That's the importance of the shift. Your team has to understand that AI is not here to replace them. It's here to enhance their working lives. That's what I want for every HR team. Not perfection, just progress, one conversation, one use case, one skill at a time.

Thanks, George. As promised, this is where we bring trust and performance together. And I'll be honest, this is also where many HR teams start to feel a little daunted. However, that's exactly why measurement, ROI, and metrics really matter. Without data, AI stays in the novelty or the pilot zone. With the right metrics, it more easily becomes part of how you run your business. For years, HR has measured adoption and efficiency: time to fill, cost per hire, engagement scores. As AI becomes part of how work happens, we also need to measure how much people trust it and how transparently it operates. So I have five core indicators here. These create a fuller, more balanced picture of responsible adoption of AI. Starting with number one, adoption. This is where you're asking yourself, "What percentage of HR processes include AI-assisted steps?" In a recent McKinsey survey, nearly two-thirds of respondents say their organizations have not yet even begun scaling AI across the enterprise. Aim to track the percentage of workflows that include at least one AI-supported decision or task, and the growth of that trend quarter to quarter. That's a simple way to begin. Number two is velocity, and this is really about speed to skill or speed to decision, and how much faster your key moments are happening. SHRM's mid-2025 data shows AI-enabled recruiting teams reducing time to fill by 18 to 30% and time to decision in performance cycles by 40%.
This velocity is your signal for agility and how quickly HR is converting insight into action. The third idea I have here is around quality: whether your faster decisions are also becoming better decisions. Track outcomes such as post-hire performance, early-tenure retention, or completion rates in L&D programs. Number four is trust. Now, this is a qualitative one. This is your human pulse. Do employees really believe AI tools are making fair, explainable, data-driven decisions? Trust can be measured through pulse surveys, candidate sentiment surveys, or even an AI confidence index. And then the final one that I have here is around risk. This is really about accountability and action. Here you're asking yourself, "How many AI systems have completed bias or explainability reviews? What's your annual pass rate?" Just 43% of surveyed organizations have an AI governance policy, with a quarter still in the process of implementing one, and almost a third of organizations have no AI governance policy at all. Now, it's easy to think about policy as the manifestation of the guardrails we talked about earlier, but you have to be careful that it doesn't have the chilling effect of stifling innovation. And this means that most HR leaders are navigating AI safety issues without expert guidance on how AI intersects with employment law, GDPR, and EEOC regulations. That leaves us all open to penalties, reputational damage, and loss of trust. So these are five metrics to start with, but there are emerging ones. I'm going to quickly highlight a couple to keep your eyes on. Explainability rate. Intervention rate, which is where humans step in and correct AI outcomes that don't meet the quality or standards you want. The percentage of AI use cases that comply with your data stewardship policies. You can start to look at a manager enablement index, which is where AI is helping coach managers and helping them make better decisions. And then candidate experience lift. I'll turn it back over to George to take us home.

Thank you, Susan. That was a lot of information in a short amount of time. Let's recap what we covered today, the four key takeaways you have right here: build trust before you scale, move from tools to systems, design responsibly, and measure what matters. As we close out today, we have one final thought. AI won't make HR more human on its own. When HR leads with trust, design, and accountability, that's when AI becomes a shared superpower. You can obviously read this slide on your own, but this is the heart of what Susan and I have described today. At the end of the day, you need people and technology working together in a way that feels natural and responsible. This is a reflection of what happens when we use these tools to bring out the best in our teams. The data gets easier, the process gets faster, but the heart of the work, fairness, judgment, and empathy, stays right where it belongs: with the people. So... Oh, sorry. Go ahead, Susan. (laughs)

Yeah, it's your turn. Maybe you're ready to step fully into that next role of chief trust officer for AI inside your organization. Maybe you're thinking about how to bring IT into the conversation to make sure your team has a real voice in cybersecurity, governance, and these new agent workflows that are starting to emerge.
Or maybe your head's still spinning at the idea of shadow AI, and that's completely okay. That's the beauty of where we're at right now. Every organization, every HR team, we're all at different points on this journey. Some of you are building governance boards; others are just trying to make sense of what tools are already being used. What matters is that we're here learning together, that we're showing up to these conversations with open minds and a spirit of curiosity. This isn't about being first. It's about being responsible. It's about leading with clarity, confidence, and a shared commitment to trust. And if we keep doing that, learning, experimenting, sharing what works, like we're doing today, we won't just adapt to this new era, we'll define what good looks like for everyone who follows. With that, have fun today. Thank you for joining us. Susan and I will now pass it back over to Zach and Kim.

Wonderful. Thank you so much, George and Susan. That was a great opening keynote. We really appreciate your time. Lots of takeaways there, from the AI bingo to the AI recruiting ecosystem to the LinkedIn Learning resources. And Susan, I really appreciated your call-out about treating AI as a teammate and not just as an incremental change. I know that we've taken some first steps on our team here to implement that ourselves, so we've already seen some of those early successes. We had a few questions come in, if you don't mind sticking around for one or two of those. Our first question is, "What leadership qualities will differentiate successful leaders and organizations in the next five years as AI becomes ubiquitous? What skills should a leader focus on and develop now to be ahead in the near future?" A two-part question.

I can start. Thank you for the question. At the heart of AI transformation, and I have been beating this drum for the last two and a half, almost three years now, this is a cultural transformation. At the heart of it, this is about changing hearts and minds about the way we work. Certainly it's about building skills, but it's about creating a shared culture of innovation and a shared culture of rethinking the way work happens. So if we go to classic management traits around change and innovation, a spirit of curiosity and a spirit of confidence that you and your teams are in it together, to learn together, to innovate together, and to really lean into new horizons, those, to me, are some of the skill sets I have seen have the most impact. The other thing I'll say, and this is another thing you'll hear come out of my mouth all the time, is tone at the top. Those leaders at the top of the organization need to be bought in. They need to understand the opportunities, the guardrails, and quite frankly, that this is a cultural journey and that they have a critical role in inspiring confidence to lean forward.

Wonderful. Thank you so much, Susan. And George had to drop today, so thank you for participating as well. Susan, we have another question that we'll get to: "How do I coach a manager who works for me and uses AI to send the exact same emails to every team member? She is skipping the human intervention step." What would be your best advice?

Yeah, this is a tough one.
It's very tempting, particularly if our employees are feeling like there is just a lot of pressure and they're feeling overworked. It's very easy to hit the magic wand that's in your email, or to use ChatGPT or Gemini or other tools to generate content. The fancy name for this is cognitive offloading. It means that we're staying so superficial in the way we're working with these tools that we're not using critical thinking as part of the activity. And unfortunately, this can cause reputational damage for that employee. It can cause mistakes and errors, and it can erode trust with teammates. So what I would do in this situation, first and foremost, as a leader across the entire organization you have influence over: I think it's important to state articulately and confidently your expectations that these tools are assistants only; they're not replacements for your work. In fact, the role of critical thinking, which is a human experience, becomes that much more important, and using these tools in ways where you're sort of coasting is actually a poor performance indicator to name as you talk about your expectations for your employees. If it were me, I would really lean into the reputational damage they're doing to themselves, the erosion of trust they're incurring with their peers, and the important opportunity to engage with these tools to open up, again, what is more human, which is creative thought.

Wonderful. Thank you. And we have time for one more question. I know a few more came in; we will answer those after this event. So our last question for this session: they're wondering about diversity and inclusion of humans in the loop. If AI takes over copywriting, marketing, et cetera, what happens to these team members?

I'm trying to really understand the question. Let me take a stab at it, and if I get it wrong, feel free to follow up in the questions and I'll take it offline. When I think about bias and diversity of voices in the training data sets used in all of these models: the models don't actually think. They're looking for patterns in the words they've been trained on. And early on with these tools, there was a lot of critical evaluation of, and concern about, the training data that underpinned them, because it was, you know, US-centric, cis-het-male based. Some of the recruiting tools had bias built into them, where they were making decisions about a person's gender or ethnicity based on names, as an example. I think we as a human community recognize these shortcomings, and I've seen tools and large language models actively try to dilute the effects of that homogeneous training data. What I would say is, when you are using these tools, be very intentional about how you want them to represent diverse viewpoints. If you are building an outline for training for new hires, for example, be intentional to include as part of your guidance, or your brainstorming with it, multiple perspectives, diversity of thought, and diversity in the way that activities are defined in a training outline, for example. You mentioned copywriting and marketing. Same thing.
Be really clear in the prompting and your brainstorming with these tools: who is your audience? What are the viewpoints that need to be represented? And make sure to train it, just like you would an intern, on your expectations.

Wonderful. Thank you, Susan. We really appreciate your time and all of your insights.

More Resources Like This

On-Demand Sessions
AI
Future of Work
Original Event Date: November 19, 2025

Strong Leaders, Strong Tech: Equipping Leaders to Grow in an AI World : Future-Ready HR - Mini “Impact” Sessions (3 × 20 min)

Vanessa Cannizzaro
Vice President, Talent Management & Operations
Heather Fuqua
Vice President Human Resources
Dessalen Wood
Global Chief People Officer
On-Demand Sessions
AI
Learning & Development
Original Event Date: November 19, 2025

Strong Leaders, Strong Tech: Equipping Leaders to Grow in an AI World : Closing Panel - From Culture to Compliance: Driving Growth Through People and Insight

Courtney King
Senior Vice President People & Culture
Lia Rollman
Director of People & Culture
Ingrid Myers
Senior Director, Culture & Talent Innovation
On-Demand Sessions
AI
Original Event Date: November 19, 2025

Strong Leaders, Strong Tech: Equipping Leaders to Grow in an AI World : Ask the Experts — HR, Compliance, and AI in Practice

Kyle Cupp
Manager, Content Experts Strategy
Bethany Lopusnak
Sr. Manager, Advisory Experts Operations
Somya Kaushik
Associate General Counsel