Strong Leaders, Strong Tech: Equipping Leaders to Grow in an AI World: Panel Discussion — The Future of HR Leadership: Confident, Compliant, and AI-Ready
In a forward-looking, high-energy panel, Danny Guillory, Aimee Pedretti, and Angela Cheng-Cimini unpacked what HR leadership must become in an AI-driven world—and what organizations must do now to prepare.
The conversation centered on a clear message: AI is no longer a technical initiative; it's a leadership competency. HR must simultaneously guard against compliance risk, shape ethical use, strengthen culture, and equip managers with the confidence and capability to lead teams augmented by AI.
The panel blended real operational playbooks (AI governance, risk frameworks, pilot selection, workforce education) with people-first leadership guidance (trust building, transparency, upskilling strategy, change enablement). Speakers highlighted the transition from "shadow AI" to shared organizational AI fluency, and emphasized HR's evolving identity as the function that ensures AI accelerates people rather than replacing them.
Session Recap
The session opened with a shared acknowledgment: AI adoption is happening faster than most organizations can govern. Employees are already using AI tools informally, and leaders are often unsure how to respond. The panel explored how HR can shift from reacting to AI behaviors to shaping intentional, ethical, and scalable adoption.
Danny grounded the conversation in reality: people are experimenting with AI regardless of official policy. HR’s role is not to police curiosity but to guide responsible exploration, reduce fear, and build trust. He emphasized psychological safety—employees must feel safe asking questions, trying AI tools, and acknowledging skill gaps. Danny also noted that AI elevates the importance of good leadership: clarity, empathy, and communication become even more essential as workflows shift.
Aimee brought the governance lens, noting that most organizations underestimate the compliance and operational risks of unmanaged AI use. She outlined the importance of AI risk assessments, use-case vetting, security reviews, and transparent model documentation. But she also stressed the need for accessibility—governance should empower people, not block them. Aimee shared strategies for building “AI confidence” in leaders through hands-on practice, structured prompts, and simple decision trees to help teams evaluate when and how to use AI.
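The panel did not share an actual artifact, but a "simple decision tree" of the kind Aimee described could be as small as a few ordered questions. Below is a minimal, hypothetical sketch in Python; the questions, categories, and routing outcomes are illustrative assumptions, not anything prescribed in the session.

```python
# Hypothetical sketch of a "when and how to use AI" decision tree.
# The questions, categories, and outcomes are illustrative assumptions,
# not a framework the panel prescribed.

def evaluate_ai_use(task: str,
                    touches_personal_data: bool,
                    output_leaves_the_org: bool,
                    low_error_tolerance: bool) -> str:
    """Return a suggested handling tier for a proposed AI use case."""
    if touches_personal_data:
        # Employee or customer data: escalate before any AI use.
        return f"{task}: route to governance review first"
    if low_error_tolerance:
        # Payroll-like work: AI may assist, but an expert verifies everything.
        return f"{task}: AI-assisted only, expert verifies every output"
    if output_leaves_the_org:
        # External content: draft with AI, human review before release.
        return f"{task}: AI drafting OK, human review before publishing"
    # Internal, low-stakes work: open for experimentation.
    return f"{task}: approved for experimentation with sanctioned tools"


print(evaluate_ai_use("Draft a manager-training outline", False, False, False))
print(evaluate_ai_use("Summarize candidate resumes", True, False, True))
```

The value of a tree this small is that a team can walk it in a meeting, without tooling; the point is a shared vocabulary for when AI use needs review, not automation.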
Angela brought a strong people-first perspective, highlighting the cultural and capability shifts required for the workforce. She described how leaders must model openness, normalize learning publicly, and communicate how AI supports—not replaces—human strengths. She also discussed reskilling strategies, competency mapping for AI-augmented roles, and the importance of maintaining dignity during transformation. Her core message: AI adoption succeeds when it is anchored in values and humanity.
Across the panel, one theme stood out: AI readiness is not just a technical transition; it's a leadership transformation. Organizations must pair structural governance (risk, data, compliance) with leadership development (trust, transparency, upskilling, communication) to unlock the full potential of AI at scale.
Key Takeaways
• Shadow AI is already happening—HR must shift from restriction to responsible enablement.
• Employees need psychological safety to learn, experiment, and admit what they don’t yet know.
• Governance frameworks (risk scoring, approved use cases, data protection rules) are essential for scale; a sketch of what risk scoring might look like follows this list.
• AI competencies must be part of leadership expectations—not optional extras.
• Upskilling cannot be one-time training; it must be embedded into daily work and supported by coaching.
• HR plays a dual role: protecting the organization (compliance) and accelerating it (capability).
• Transparency builds trust—leaders should openly share how they’re using AI.
• AI raises, rather than diminishes, the importance of human-centered leadership.
• Adoption accelerates when early pilots focus on real pain points, not hypothetical benefits.
• Organizations must define what “AI-ready leadership” looks like—and reward it.
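To make the risk-scoring bullet above concrete, here is one minimal sketch of how a use case might be scored and mapped to a review tier. The factors, weights, and tier cutoffs are assumptions for illustration only; the session named the ingredients (risk scoring, approved use cases, data protection rules) but not a specific formula.

```python
# Hypothetical AI use-case risk scoring; the factors, weights, and cutoffs
# below are illustrative assumptions, not a framework from the session.

RISK_FACTORS = {
    "handles_employee_data": 3,   # data-protection exposure
    "customer_facing_output": 2,  # reputational exposure
    "regulated_domain": 3,        # e.g., payroll, benefits, compliance
    "no_expert_reviewer": 2,      # no SME available to vet output
}

def score_use_case(attributes: set) -> tuple:
    """Sum the weights of the factors present and map to a review tier."""
    score = sum(weight for factor, weight in RISK_FACTORS.items()
                if factor in attributes)
    if score >= 5:
        tier = "restricted: cross-functional review (HR, Legal, IT, Security)"
    elif score >= 2:
        tier = "conditional: approved tools only, human review required"
    else:
        tier = "approved: open for experimentation"
    return score, tier

print(score_use_case({"handles_employee_data", "regulated_domain"}))  # high risk
print(score_use_case({"customer_facing_output"}))                     # mid risk
```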
Final Thoughts
The future of HR leadership is neither purely technical nor purely relational—it is the intersection of the two. AI requires leaders who can navigate ambiguity, build trust, communicate transparently, and help teams integrate new tools into meaningful work.
The panel made clear: HR must guide organizations from fragmented, shadow AI behaviors to shared superpowers grounded in ethics, clarity, competence, and humanity. By embracing AI as both a strategic accelerant and a cultural imperative, HR leaders can shape workplaces that are more compliant, more capable, and more human.
Program FAQs
1. How do we address “shadow AI” without shutting down innovation?
Create a safe reporting culture, introduce clear guidelines, and give employees approved tools so curiosity is directed—not suppressed.
2. What are the first governance steps HR should prioritize?
Define approved vs. restricted use cases, set data-handling rules, and establish a cross-functional AI review group (HR, Legal, IT, Security).
3. How can we help leaders feel more confident using AI?
Give them hands-on practice, templates, and structured prompts. Confidence comes from usage, not theory.
4. What skills define an “AI-ready leader”?
Curiosity, critical thinking, transparency, communication, ethical reasoning, and change enablement.
5. How do we maintain trust during AI adoption?
Communicate purpose, acknowledge risks, share limitations, and be transparent about how AI will support—not replace—people.
6. How can HR support employees who fear AI?
Provide psychological safety, skill-building resources, real examples, and a clear message that human expertise still matters.
7. Should HR lead or co-lead AI transformation?
HR must co-lead alongside IT, Legal, and business operations—HR owns people readiness, culture, and capability.
8. How do we measure the impact of AI adoption on people?
Track adoption rates, sentiment, productivity gains, skill development, and compliance alignment.
9. What’s the best place to start with AI upskilling?
Begin with role-relevant micro-skills: writing prompts, evaluating outputs, and identifying appropriate use cases.
10. How do we ensure AI supports culture rather than eroding it?
Use AI to enhance—not replace—connection, communication, and coaching. Anchor rollout decisions in shared values.
Session Transcript

Zach: If you can, load up the chat with a warm welcome for our next speakers. I'd love to welcome three individuals I reached out to as I thought about the future of leadership, about integrating these strategies, building more confidence, doing this in a compliant way, and becoming more AI-ready. They're awesome, and they're already doing this work at a high level. I love the emojis coming in. First, Danny, you're up here. Welcome Danny Guillory, chief people officer at Gametime. We also have Aimee, a principal AI transformation expert with Mitratech. And we have Angela, a dear friend, advisor, and mentor to the chief engagement community; if you've been to her programs, you've seen her before. She's a CHRO as well. Let me stop sharing and welcome these three to the stage. Welcome, Danny. Good to see you.

Danny: Good to see you, Zach.

Zach: Aimee, welcome in.

Aimee: Morning. Happy to be here.

Zach: And Angela, last but not least, welcome in. Let's jump into it. We have a limited amount of time; we could unpack the world and solve the world, but we only have about 30 minutes. The theme of this panel has been: how do we become more confident, more compliant, and more AI-ready? A lot of the leaders in the room have already shared concerns. They're wondering: is it actually safe to unlock at scale? How do we make sure we're not exposing critical data and information about our people? So I'd love to start there at a high level. Introduce yourself, share some of the work you're doing, and tell us: what does becoming AI-ready look like to you? What are your thoughts on building readiness across the organization? Danny, I'd love to pass it to you to kick us off.

Danny: Sure. Thanks, Zach. My name's Danny Guillory, chief people officer at Gametime, and previously chief people officer at Glassdoor, so I'll share experiences that mix previous and current employment. The first thing I've found really helpful is to think about AI readiness, implementation, and integration on three different levels. The first is individual enablement: how we use it day to day. The second is how we think about workflows across the organization, potentially deploying agents to increase efficiency and capacity. The third is transformation at scale across the organization: rethinking an entire function and how it works. I bite it off into those chunks and approach each a little differently. I'm starting to alter how I think about the first one, and that points to something important about all of these discussions: this field is emerging. Whatever you hear today may be obsolete in six months, so don't think you're behind; we're all learning as we go. For individual enablement, we're leaning away from generic, across-the-board enablement and toward focusing our decisions, platforms, and investment on early and deep adopters, because we believe that's where the greatest transformation will come, versus pulling along people who might be resistant. The other thing I'll mention quickly, because we have a lot to cover: in terms of education and growth, generic courses can be helpful, but we're leaning heavily toward featuring and encouraging use cases among the people in the function. We ran an AI readiness survey when I was at Glassdoor, and what people wanted most was to learn how others were using AI in their own area. If we can promote innovation and experimentation, and feature the failures as well as the successes so people can learn from both, that's how we'll accelerate learning fastest. I have a ton more to say, but let me stop there, because I know we're limited on time.

Zach: That's a great way to kick us off. I appreciate you breaking down the levels of readiness and maturity, and the point that if we have limited resources, we should pull the levers with the biggest impact: center development on the people who will be the biggest influencers and transformers, then leverage social learning so others can learn from them. Angela, I'll pass it to you next. Introduce yourself, and tell us what AI readiness starts to look like for you.

Angela: Angela Cheng-Cimini; really happy to be here. Zach, it's always good to see you and to be with my fellow panelists. I'm currently the CHRO at The Chronicle of Philanthropy, and before that I was the CHRO at Harvard Business Review. Danny, I really like your approach to scalability, and I want to double-click on the piece about the individual. I've been finding that I'm actually spending more time with the executive team. I love the idea of glomming onto your early adopters and using them to highlight successful use cases in your organization, but the biggest lever is making sure your executive team actually understands the application and how it moves the business forward. Otherwise you just get individual pockets of deep use without any connection to the strategic value of adopting this technology. And that's true for any major change transformation, right? It has to be led from the top and clearly connected to what the organization is trying to achieve. Once executives are routinely using AI, it starts to filter through all their conversations, their thinking, their interactions with staff. If it doesn't remain firmly in their hands, it's much more catch-as-catch-can, and adoption becomes that much harder.

Zach: That makes me think: we don't want innovation for innovation's sake. We want innovation...

Angela: That's right.

Zach: ...in the direction we're trying to move the organization. If you just allow the Wild West, people innovating and testing these tools on their own, you might have some really cool things happening, but is it strategically connected to what the executive team cares about and where they're trying to guide the company?

Danny: Zach, before we go on, can I jump in for a second? Angela, I appreciate what you mentioned. What I was describing there was the third part, enterprise-wide transformation, which is facilitated at the exec level. I didn't focus on it as much as you did, but I really think it's both a bottom-up and a top-down transformation. I wouldn't focus exclusively on the top; bottom-up is also going to be really important, because again, this is an emerging technology and we don't know where it's going to go.

Zach: That's great. All right, Aimee, welcome in; great to see you again. Please introduce yourself. I know you're a principal AI transformation expert at Mitratech, so I'd love to hear about the work you're doing. What does AI readiness start to look and sound like to you?

Aimee: Thanks, Zach. Happy to be here again. My role is a little different from the other two panelists': I'm currently working on AI product development and AI transformation at Mitratech, bringing my background as a former global HR leader and my work in executive coaching and consulting. I'd build on what the other panelists have said, and on what you heard in the keynote from Susan and George. Readiness starts with a compelling vision that's driving you and adding real value to your strategy. Leaders who are energized and excited about the potential of AI are a really important ingredient. Then you need a network of AI champions, a way to measure your efforts, and supportive structures that build AI capacity across both employees and teams. So, some real synergies across this panel. (laughs)

Zach: And I know you've built scorecards and maturity models, so we'll get into that. But I'd love to start with the cultural piece: how do we help people think about AI, given how it shapes so much of what we do? George and Susan shared concerns about keeping humans in the loop, how important critical thinking is, and treating AI as a teammate and copilot. Aimee, I'll pass it back to you. How do we build a human-in-the-loop culture that gets people excited to engage, preserves critical thinking and collaboration, and still keeps the speed AI can bring?

Aimee: The concept of human-in-the-loop comes from AI development: the idea of having human subject matter experts involved all the way from the design phase through reviewing and auditing output for quality, accuracy, bias, and other issues. We can take that concept and apply it as a mindset to the culture, setting the expectation that individuals see themselves as the human in the loop. We heard some of this in the response to the earlier question, but the core idea is that you're responsible for the output you produce with generative AI, period. As HR, it's critical to help build a culture of innovation, but also the supportive structures that keep the critical-thinking piece. We do not want to delegate critical thinking or human judgment where it doesn't make sense. On the question of speed, of not slowing people down: there's always tension between speed and precision in innovation. I would caution against over-focusing on speed to the detriment of quality. At the individual level of adoption, we're seeing over-reliance crop up in organizations that are further along in AI adoption, where use is out in the open rather than shadow AI. That over-reliance is a risk if we focus too much on speed and not enough on quality and accuracy. A recent Stanford study found that 40% of office workers have encountered what the researchers call "workslop": output that looks amazing on the surface, like the email example in the earlier question, but falls apart on closer inspection. They found that the person producing it might save time in the short term, but the downstream impact can be two hours of additional work for somebody else. So part of the human-in-the-loop approach is creating a culture of ownership and responsibility, and performance management around quality output.

Angela: What I find super interesting about the concept of human in the loop is that right now, the natural state is for employees to say, "The way I currently do work is: I am the loop." What we're asking people to do is break that habit, let go of some of their work, and delegate it to AI. So for some people, human in the loop is not a security blanket; it isn't comforting, because it's already how they work. Everything is human in the loop. We're asking them to break the loop and start a brand-new one where AI plays a part and the human continues to play a part, to different degrees. HBR actually has a really useful quadrant in its current issue: there are "no regrets" tasks you completely hand to AI, and sacrosanct tasks you never let AI touch. Understanding where your work falls in those quadrants can be really clarifying. I say "human in the loop" all the time myself, but it's an interesting construct, because people think, "If I'm going to be accountable, I want to touch every piece of the work." We're asking them to completely break how we've approached work for decades and adopt brand-new thinking. So "human in the loop" might be doing us a bit of a disservice while we're still adopting it.

Danny: Two pieces I'll mention on humans in the loop. One is that the humans in the loop are changing, meaning who the humans are. As I restructure my own team, I'm bringing in more deep subject matter experts and finding less application for generalists, because monitoring and testing what AI produces, and knowing whether it's accurate, good, or helpful, requires deep expertise. So how we structure teams will probably start to change. In fact, the way I'm pitching the experience to people is: you won't build capacity through people; you'll build it through agentic AI, and you're coming in as a subject matter expert. The second piece, as others have said: don't think about AI for AI's sake; use it specifically to solve business problems. On the metrics question from the earlier presentation: I facilitated a workshop with a set of investors a few weeks ago about how they evaluate AI readiness when they evaluate companies. The measures they used were all business metrics that received a different kind of focus: product velocity, 30-day retention for AI users, the company's gross margin, and how the company uses its proprietary data and feedback loops to serve customers better. The point is, it wasn't AI for AI's sake; it was metrics they already use to evaluate a business, examined to see whether they're advancing faster or more productively. So as we think about what to measure, it helps to ask which business metrics we expect AI to move most, and let that inform and guide what we do.

Zach: I hope everyone listening is taking notes on the metrics Danny just shared. I'm curious, for those of you in the chat: have you started redefining what you measure to track maturity, adoption, and impact? Put it in the chat if you have a metric or two you care about. That tees up a follow-up for you, Aimee, around maturity and how we score these things. I remember you created an actual AI maturity scorecard for HR. What's your perspective on how we measure and understand this? First, what do we measure, and what matters? And what does that scorecard look like?

Aimee: Sure. This is specific to Mitratech, something we created in the last few months, so it's hot off the presses. (laughs) We're in the process of having our teams assess themselves across six dimensions, with four levels per dimension. To your question about HR: we use the same scorecard across all our teams, though we have a slightly different version for our engineering team, since maturity looks a little different there. I can quickly rattle off the six dimensions. (laughs)

Zach: Sure.

Aimee: First is leadership and strategy: the degree to which AI is embedded in your planning, decision-making, and departmental goals. Second is AI competency: confidence and capability at the department level, which also includes the role of the AI champion; we have one in each department. Third is use case identification and experimentation: to mature in AI you need a disciplined experimentation machine, so where are we with that? Fourth is process integration and tool usage: how embedded is AI in your workflows and systems? Fifth is metrics: seeing teams move from anecdotal value statements to defining and tracking the clear value that AI efforts are bringing. And lastly, data and governance: data health and quality, stewardship and appropriate use of data, and overall governance. That's the very quick version (laughs); I could talk a lot about this. I'll also mention that it's not a one-size-fits-all approach for us. We're not looking to have everybody at level four right away; that's not the goal. We ask teams to identify where they are now, where they want to be in a year, and which areas of focus will bring the most impact and momentum.

Angela: I love that, Aimee, about meeting people where they are. When we talk about AI as a revolution, it can make a scary situation even scarier. We know learning happens best when information is exchanged in the modality most readily received by the learner; screaming "You must learn, you have to be an expert" does nothing to get the synapses to fire. So I appreciate that you're thoughtful about how you move people along while making clear that they do need to keep learning and growing. On governance, in a really immature organization, one place to start is philosophical: how do you plan to embrace AI? I'm in an organization heavy with journalists and content creators, and they are very skeptical, some even fearful, about AI-generated content. We first had to assure them that we would use it intentionally and thoughtfully, not in a way that would replace them. Then we moved into how it mechanically works and what the appropriate use cases are. Now we're capturing the wins: grabbing the early adopters to come forward and talk about where they're getting efficiencies and what's not working, because hearing that from peers and colleagues has been much more compelling than hearing it from me in HR, or even from the CEO. So I wanted to call out that your pathway for moving the organization along is a great one to follow.

Zach: I second that, and Aimee has a lot more she could unpack there, so I encourage everyone listening to reach out to her; ask about the scorecard and dig into it deeper, because it starts with measuring where you are today and then planning pathways around what matters most. And Angela, I appreciate how you've approached it: philosophically first, getting the right mindset and narrative into the organization, then working that into actual use cases. We're coming up on time, so I want to turn to something we've touched on, including with George and Susan: the sludge factor. How do we avoid risking the quality of what we put into the market, the client experience, and even the employee experience? Many organizations, and especially leadership teams (Danny, you can probably speak to this at the board and investor level), feel pressure to move fast. It's a race; we want to be first to market and claim our piece of the industry. But moving that fast can put quality, compliance, or governance at risk. Danny, how do we understand our risk levels with AI? You've talked about how certain parts of the business demand 100% accuracy in the output, while in other areas we might not need that. Could you talk about that?

Danny: Sure. On quality and risk: I've been with companies, and am at one now, where our competitiveness really depends on some of the transitions happening with AI, so there's been a kind of existential urgency. For me it goes back, repeating a little of what I said before, to having the right people in place as AI is being used: making sure people with deep subject matter expertise are the ones vetting things. We don't talk enough about how to compose our workforce around this and how to be thoughtful about who we bring in. A second piece, which came up in our prep call, Zach: there are areas that require 100% accuracy, and although we may use AI there to some extent, there has to be deep, deep vetting before we trust it. In other areas, 90% accuracy is okay. For example, if I use AI to develop a manager series on how to give feedback, any mistake in that remaining 10% probably won't be fatal. But if I use it in payroll, in how we pay people, even a 0.01 error rate at a company with thousands of people is a fatal mistake. So thinking about where we use it and what our risk tolerance is becomes important. The last thing I'll mention: when we developed our AI governance at Glassdoor, credit to our legal and IT teams, instead of framing it as what you can't do, the guides were put out as a roadmap for how to use it. How we frame risk can train people to use AI positively, rather than hearing "don't do this, don't do that." I thought that was something really innovative those teams did when we put together our governance effort.

Zach: Angela, I'm curious about your thoughts, especially since you've shared that you have a lot of creators who are concerned about AI and about their roles specifically. As an organization, are there tasks where you'd say, "Use AI for this entire responsibility," and others where your risk tolerance needs to be more intentional and careful?

Angela: Yeah. I mean, reputational damage, right? That risk is practically infinite. There have been plenty of horror stories about hallucinated articles going out under really big banner-headline media organizations. That's not just egg on the face; it signals an irresponsible, or at the very least passive, recognition of how AI is seeping into your organization. You know it's out there, but you're not governing it in any way. So we're working our way up. We're talking about which routine tasks we can eliminate; we're piloting and experimenting before we get to the meat of, say, how we might use it to generate cover stories. That's a bit far out, and we need to be really careful about it. Going back to meeting people where they are: it's not only about individual comfort, but also our willingness to treat our brand as a guinea pig. We don't want to do that, given who we are and what we do. It's no different from a pharmaceutical company not wanting to use AI to concoct the next medication; that could be deadly, to Danny's point about a fatal error. So: moving cautiously, at a comfortable speed, and hoping that momentum eventually takes on its own acceleration.

Zach: Aimee, any thoughts on finding that sweet spot between velocity and integration? We want to lead within our industry, but in a way that builds trust and safety with our people.

Aimee: What we've done is build a culture around a vetted set of approved tools, and within those there's a lot of room and latitude for experimentation. We also have the AI champions network I mentioned; each champion is tasked with helping roll out AI transformation in a way that follows our rules, guidelines, and governance structures. But new pilot tools, anything new we're bringing in, or anything that might impact the back end goes through a more formal governance and vetting process. Actually, I'd leave questions about that to our next panel; Saumya is on (laughs), and she's the one deeply involved in that process.

Zach: I want to second something Aimee shared. We didn't get to dig in much, but we'll touch on it throughout the day: developing these groups, amplifiers, and champion networks to help guide what safe and effective use of AI looks like. Danny talked about it too: double down on the influencers and champions, and they'll create monumental change, because they have the skill set, or the mindset and philosophy, and they can leverage social influence, almost peer pressure, to bring others onto the same path. We're coming up on time, sadly; I feel like we were just getting warmed up. This conversation was incredible, and you three are incredible. Let's do a quick round-robin of closing parting shots. For our community of HR leaders on the call who are trying to create AI readiness in their organizations: what's one thing they could start on Monday? Maybe not this Monday, since there's a holiday next week, but say we wanted to take another step forward on this journey. What would you recommend? Danny, I'll pass it to you first.

Danny: A very easy one, if folks are at the beginning, is something I've done as a meeting prompt with my team: just ask people to share their use cases within the team, how they're using AI, what's working and what's not, particularly if you're starting from the ground up and some people are nervous. At a larger scale, briefly: a lot of what Aimee described is very parallel to standard change management. So think about AI the way you'd think about any change management process we've been through before, even though it has some different elements. If you're at that level, use your same change management techniques; don't throw them away. If you're at the very beginning, just get people talking about use cases to develop energy around it.

Zach: I'd really reaffirm that social sharing piece. I always say, whatever is shared grows. If you can create a culture of sharing and peer-to-peer support, it will continue to cultivate and grow throughout the organization. Angela, what about you? One thing people should start on Monday?

Angela: I'd underscore: make sure your executive team is walking the talk, doing their own exploration and experimentation, and highlighting and showcasing the ways they're using AI for their own work and to advance the business. They've got to role-model. HR is a critical thought partner in this, but leadership has to be half a step right behind you, because it all falls apart if leadership isn't on board.

Zach: That makes me think: if you ran Monday share-outs where the organization shares these things across peers, you could assign each executive to lead the first share-out, so they have to come prepared. It puts them on the spot...

Angela: Yeah, yeah.

Zach: ...and forces them, right? All right, Aimee, bring us home. One last thing we should do on Monday?

Aimee: I'm going to say: start by starting. I know that's not very specific, but I think it's really good advice. We can get trapped in the idea that we need a big program or to roll everything out at once. There are all sorts of pieces you can do, and it will depend on where you are. More specifically, if you're really new to this, one thing you can do is run an AI sentiment survey, so you understand where people are, where their heads are at, and whether they're actually using AI. That could be a place to start for some organizations.

Zach: Amazing. Thank you so much, Aimee, Angela, Danny. Can we all give it up for these incredible leaders, for spending a short amount of time with us and sharing an enormous amount of guidance and expertise? Thank you to the three of you. This was awesome. (instrumental music)