Strong Leaders, Strong Tech: Equipping Leaders to Grow in an AI World — Ask the Experts: HR, Compliance, and AI in Practice

In a highly practical, insight-heavy conversation, Kyle Cupp, Bethany Lopusnak, and Somya Kaushik broke down what it truly takes for HR, Legal, and Compliance teams to operate confidently and responsibly in an AI-enabled workplace. Rather than speaking in hypotheticals, the panel focused on real-world challenges—shadow AI, governance gaps, data security concerns, employee confusion, and rising regulatory pressure—and offered concrete frameworks for building AI practices that are safe, ethical, compliant, and scalable.
The speakers blended hands-on operational guidance (AI policy development, risk scoring, governance workflows, documentation requirements, vendor evaluation) with people-centric leadership practices (transparency, education, change management, cross-functional partnerships). Above all, they emphasized that AI success depends on HR and Compliance working together—not in silos—to build systems, training, and guardrails that empower employees while protecting the organization.
Session Recap
The session opened with a shared recognition: AI use across organizations is already widespread, whether sanctioned or not. Employees are turning to AI tools for support, but HR and Compliance often lack clarity about what’s happening behind the scenes. The panel examined how to move from scattered, informal, high-risk AI usage toward a structured, legally sound, ethically aligned practice.
Kyle outlined the current state of AI in the workplace: fragmented experimentation and “shadow AI” happening inside every organization. He emphasized the urgent need for clear policies, plain-language guidelines, and practical examples so employees know what is safe and what is not. He also discussed the importance of workflow mapping—understanding where AI fits into everyday processes and how HR teams should document decisions, approvals, and required guardrails. Kyle stressed that AI policy is not a one-time deliverable: it must evolve as regulations, tools, and internal use cases mature.
Bethany brought the operational lens, focusing on risk, documentation, and governance maturity. She explained the importance of consistent approval pathways, centralized repositories for AI requests, and clear delineation of roles among HR, Legal, IT, and business leaders. According to Bethany, organizations should adopt a “trust but verify” approach—empowering employees while embedding oversight and tracking. She also described how advisory teams are increasingly guiding clients through scenario planning, compliance monitoring, and training implementation so that risk is mitigated early, not after an audit or incident.
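To make that intake-and-oversight workflow concrete, here is a minimal sketch of how an AI use-case request might be risk-scored and routed for approval. The factors, weights, and thresholds below are illustrative assumptions drawn from the panel's themes (employment-impacting decisions and personal data carry the most weight), not a rubric the speakers prescribed.

```python
from dataclasses import dataclass

# Illustrative risk factors for an AI use-case request. The factors,
# weights, and thresholds are hypothetical, not a prescribed standard.
@dataclass
class AIUseCaseRequest:
    name: str
    handles_personal_data: bool         # e.g., employee or candidate PII
    affects_employment_decisions: bool  # hiring, promotion, termination
    external_vendor: bool               # third-party model or service
    output_is_customer_facing: bool

def risk_score(req: AIUseCaseRequest) -> int:
    """Sum weighted risk factors into a simple 0-10 score."""
    score = 0
    score += 4 if req.affects_employment_decisions else 0
    score += 3 if req.handles_personal_data else 0
    score += 2 if req.output_is_customer_facing else 0
    score += 1 if req.external_vendor else 0
    return score

def approval_path(req: AIUseCaseRequest) -> str:
    """Route by score: low risk is pre-approved and logged, everything
    else gets human sign-off, high risk goes to the AI committee."""
    score = risk_score(req)
    if score >= 7:
        return "AI committee review (HR, Legal, IT, Security)"
    if score >= 3:
        return "Manager plus HR/Legal reviewer sign-off"
    return "Pre-approved: log the use case and proceed"

# Example: an AI-assisted resume-screening request scores high and is
# routed to the cross-functional committee.
request = AIUseCaseRequest(
    name="Resume screening assistant",
    handles_personal_data=True,
    affects_employment_decisions=True,
    external_vendor=True,
    output_is_customer_facing=False,
)
print(risk_score(request), "->", approval_path(request))
```

The design point is simply that more human review attaches to higher-risk uses, echoing the “trust but verify” framing: employees are empowered by default, while employment-impacting uses are never pre-approved.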
Somya tackled the legal and regulatory reality head-on. With emerging AI legislation in the U.S., EU, and globally, she highlighted the need for organizations to adopt model documentation, data privacy safeguards, explainability practices, vendor diligence, and bias monitoring. She stressed that HR leaders must understand which decisions AI can assist with—and which it legally cannot. Somya also underscored that transparency is both a leadership expectation and a compliance requirement: employees must understand how their data is used, and organizations must be prepared to show regulators exactly how AI-supported processes work.
Across the conversation, all three experts returned to a core message: HR, Legal, and Compliance cannot manage AI alone. Success requires a cross-functional approach rooted in clarity, continuous learning, ethical frameworks, and shared accountability.
Key Takeaways
• Shadow AI is already happening—organizations need policies, training, and transparency to regain control.
• AI governance must be cross-functional, involving HR, Legal, IT, Security, and business leaders.
• Documentation is critical—maintain clear records of use cases, approvals, model risks, and data flows.
• Regulations are accelerating; organizations should prepare now for explainability, auditability, and bias controls.
• Employees need simple, accessible guidance—not legal jargon—to use AI responsibly.
• AI cannot replace human judgment; HR must define where human oversight is mandatory.
• Vendor management is part of AI compliance—organizations must evaluate privacy, training data, and model behavior before partnering.
• Training is essential—leaders must know what AI can do, what it cannot do, and how to escalate concerns.
• Transparency builds trust—employees should know how AI fits into decisions that affect them.
• AI maturity is iterative—organizations must update policies, practices, and risk frameworks as tools and laws evolve.
Final Thoughts
The session made one truth unmistakable: AI readiness is about responsible, collaborative leadership—not just technology. HR, Compliance, and Legal must serve as architects of safe, ethical adoption, ensuring people understand how to use AI and how AI impacts them. By pairing strong governance with education, communication, and cross-functional partnership, organizations can accelerate innovation while protecting culture, compliance, and employee trust.
The future will not reward organizations that avoid AI—it will reward those that deploy it safely, transparently, and with strong human oversight.
Program FAQs
1. What’s the first step in reducing “shadow AI”?
Start with clear, simple policies and approved tools. Employees turn to unsanctioned AI when they lack guidance—give them structure.
2. How should HR and Legal share responsibility for AI oversight?
Legal handles regulatory interpretation; HR manages policy communication, training, and ethical use. Both review high-risk use cases.
3. How do we evaluate whether an AI tool is compliant?
Ask for documentation: training data details, bias audits, security standards, model explainability, and data retention policies.
4. How often should AI policies be updated?
Quarterly at minimum—AI laws and tools evolve rapidly.
5. What risks arise when AI is used in hiring or performance decisions?
Bias, lack of explainability, and legal restrictions. Many regions require human review for any employment-impacting decisions.
6. How do we make AI training accessible to non-technical employees?
Use role-based modules, real examples, “dos and don’ts” lists, and short workflows explaining safe usage.
7. How can we ensure responsible experimentation with AI?
Create controlled sandboxes, approval forms, and guardrails around sensitive data.
8. What documentation is required for compliance audits?
Model factsheets, use case logs, risk assessments, vendor due diligence, and policy acknowledgments. (A sketch of one use-case log entry follows this FAQ list.)
9. How can HR encourage adoption while managing risk?
Lead with education and empowerment, backed by transparent policies and escalation pathways.
10. What does good cross-functional AI governance look like?
A shared committee overseeing policy, risk scoring, approvals, and monitoring—with HR, Legal, IT, and business partners equally represented.
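Building on FAQ 8 above, here is a minimal sketch of what a single entry in a centralized AI use-case log might capture. Every field name is an illustrative assumption; map the structure to whatever your auditors and counsel actually require.

```python
from datetime import date

# Hypothetical structure for one entry in a centralized AI use-case log.
# Field names are illustrative, not a regulatory template.
use_case_entry = {
    "use_case": "AI-assisted drafting of HR policy FAQs",
    "owner": "HR Operations",
    "approved_by": ["Legal", "IT Security"],
    "approval_date": date(2025, 1, 15).isoformat(),
    "vendor": {
        "name": "ExampleVendor (hypothetical)",
        "due_diligence_completed": True,
        "data_retention_policy_reviewed": True,
    },
    "risk_assessment": {
        "score": 4,                     # from your internal rubric
        "bias_audit_required": False,
        "human_review_required": True,  # human in the loop is mandatory
    },
    "data_flows": "Internal policy documents only; no employee PII",
    "policy_acknowledgment": "All users completed AI-use training",
    "next_review_due": date(2025, 4, 15).isoformat(),  # quarterly cadence
}
```

Kept as structured records rather than scattered emails, entries like this can be pulled together quickly when a regulator or auditor asks exactly how an AI-supported process works.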
Session Transcript
Moderator: I'm very excited to introduce our set of HR experts. Everyone, this is your opportunity to hear directly from Mitrotech's certified HR compliance experts about how AI is reshaping HR operations. From new legal obligations to the gray areas around ethics, policy, and employee relations, this is your time. So, first things first: start dropping your questions into the chat. We're curious: what do you want to know about AI with respect to the workplace and in the context of HR? Start typing; we want to hear all of your questions, and we will get to as many as possible in the next 30 minutes. While you do that, I'll ask our experts the very first question. Experts, please introduce yourselves, and then we'll start nice, light, and easy: what is your late-night guilty pleasure snack? Who's the first victim? Somya, I see that you are off microphone.
Somya: I can go. Hi everyone, I'm Somya. I'm associate general counsel at Mitrotech. It's really a pleasure to be here, so thank you for having me. At Mitrotech, I oversee the AI committee and run it across the enterprise, working with product, marketing, security, and all our other stakeholders to make sure we are adopting AI in a way that fits our business, our culture, and our strategic plan moving forward. I also oversee a bit of our litigation and privacy work, and some of our IP portfolio. But AI, as you all know, has been at the forefront, so it has taken the majority of my time, which I do love. It's been a great experience, and I'm happy to share some of those insights: how to even start a program like that, what it means to have a program, and how to continuously keep up to date with some of those really confusing, ever-changing laws. I am here as a resource, so please drop in your questions. And my late-night snack: I really had to think about this, and it's Doritos. It's not ice cream, it's Doritos. (laughs)
Moderator: Good answer. Thank you. Kyle, we'll move over to you.
Kyle: Sure. I'm Kyle. I'm a content strategist. I work with a team of HR and compliance experts who help our customers navigate everything in the HR compliance world. And I don't think that one should ever feel guilty about their late-night snacks, but then I say that and my answer is boring. It's popcorn.
Moderator: Nothing wrong with popcorn. Always a classic. Wonderful. Thanks, Kyle. And Bethany?
Bethany: Hi, folks. I'm Bethany Lopusnak, and I'm excited to be here. I have spent the last 25 years or so working in the HR benefits space, and the last 11 with Mitrotech's HR compliance solution offering, advising employers on HR and benefits compliance and operations. I am particularly interested in how AI can help people do their best work and make better decisions, so I'm really excited about our session today. It's something I'm constantly exploring internally as one of our AI champions. As for the late-night snack, I'm glad to say I'm in good company: I'm 100% team salty. Honestly, it's not unusual for my husband to walk in late at night and find me halfway to the bottom of a family-size bag of kettle chips that clearly was not meant for one person. Going with it anyway. (laughs)
Moderator: That is too good.
Moderator: All right. I see questions are slowly starting to trickle in, so we'll give it a few more moments to get going. Before we do, I think it's important that we lay some foundational groundwork on the basics of AI, so let's jump into that quickly while I pull up some of those questions. Our first question: are there strict prohibitions on what AI can do for HR operations?
Somya: I can jump in there. Laws are always changing, so this is something to keep up with. We have very sectoral laws in the US, and then we've got the EU AI Act as the global standard. So, it depends. It's a great question, but it's hard to answer definitively because it's going to depend on what jurisdiction you're in and which laws apply to you. Generally speaking, at Mitrotech we like to look to the EU AI Act as the gold standard, since there's no federal AI or privacy law in the US at this moment.
What you want to keep in the back of your mind, or even the front of your mind, when you're deploying your own AI systems in the HR and employment space or using a third-party tool: you want to make sure no dispositive decision is being made entirely by an AI tool. We've heard about the recruitment process, about hiring and terminations and how you make those determinations. Those are what a lot of the laws are trying to protect, so that an individual doesn't have one of those decisions made entirely by an AI tool. It needs to have a human in the loop. When you're working with these tools in an HR setting, you want to understand where you can truly get value from an AI tool. Maybe that's sifting through candidates, or identifying certain things, but it doesn't stop there. Then you've got to bring in the human element of critical thinking and analysis. The overarching intention behind all the laws is that you don't want such a big, dispositive decision about someone to be completely decided by AI tools. Humans should be involved. That's one thing you cannot do.
The other thing is that privacy and AI laws really try to protect consumers from any kind of deceitful, manipulative, or fraudulent activity. Although that's sometimes hard to completely navigate, and we know scams and phishing will always be out there, that's the other thing I would keep in mind as you work with AI tools in any setting. Whether you're deploying them or using them in your HR setting, make sure you're not misleading a candidate with the information you're putting out through an AI tool, and even in the actual employment relationship, you don't want anyone to feel they've been misled. I would say those two things are important in figuring out what you can and cannot do.
Moderator: Yeah. We have talked a lot about human in the loop throughout the opening keynote, and it was mentioned in our last panel as well. Now, let's take a legal perspective on it.
Moderator: What does human in the loop really mean, and how does it affect liability?
Somya: Human in the loop is a phrase you will hear forever onwards (laughs), in some variation, and it's important for a company, or even for yourself if you're using AI in your own organization. The reason we use human in the loop to reduce liability is that we want to make sure what we're putting out there is accurate, is up to date, and is not going to deceive anyone or cause any misrepresentation for a company. If no one is reviewing the content you're creating with AI, say the marketing material you're using or even a job posting, and there's something in there that isn't accurate, or it promised something you can't fulfill on the back end because you didn't review it and weren't part of that process, you open yourselves up to liability. Someone could rely on what you provided, to their detriment, and now the organization is responsible for any injury or harm that was caused.
Having a human in the loop really means that anything you create that will go out to a consumer, or as external work product, has someone who can review it, verify it, make sure the information makes sense for the context, and confirm it's truly accurate and serves its purpose. That way you're not putting things out there that expose you or your company to liability. It's about filling that gap. And there are certain things where the risk and the liability are low, so it requires a little bit of assessment. If you're putting out marketing material with promotional offers, deals, and discounts, you're going to want a human involved, because those have larger consequences if you can't follow through or someone is injured or damaged by them. There's a sliding scale of liability, and each case will look a little different.
Moderator: Okay. And our last question for the foundational groundwork we're laying in this section: what are some ethical concerns HR leaders should consider when using AI?
Somya: I think the way to approach that is to recognize that bringing in AI, especially in the HR context, actually emphasizes the need for humans even more: the human skills and soft skills that AI is not going to be able to tackle. The main thing is that you don't want to take the human element out of the process. So I would advise those deploying AI tools in the HR and employment setting: a lot of this work requires soft skills, right? When you're interviewing candidates, or working with somebody when something comes up in the employment setting, that is not going to be completely replaced by AI. I would encourage folks to see AI as an efficiency tool. It's there to help you use your limited time more efficiently.
You can use that time for valued work, for human-skill work, that will make your company stand out from others because you have that human approach, where maybe others have replaced it entirely with AI tools and agents. So my biggest advice is: use this opportunity to become more human, actually.
Moderator: Perfect. Thank you. A lot of questions have come in from our attendees and participants, so let's jump in. First one: how can HR ensure fairness when using AI in recruitment, especially regarding bias in algorithms?
Somya: I'm not sure if any of the other panelists want to tackle this, but I'm happy to throw in my few cents. (laughs) I think this goes, again, to the human element. With the foundational model, vendor, or tool you're using, you want to understand how they trained the model, how you're going to be training the model, and what kind of testing you or the tool does to eliminate as much bias as possible. If you're deploying your own, that requires working with your engineering and product folks and really understanding the testing: the way the model was trained, what kind of data you use, and how to eliminate bias as often as possible. It requires revisiting; it's not set it and forget it. And I don't think you can ever ensure there's no bias, which is why human in the loop is so important. It's a learning system, and in the beginning especially, you are going to see things that are not accurate, that might be biased. With continuous training and auditing of the system, along with a human in the loop, I think you can get pretty close to the assurance you'd like.
Bethany: I'd love to jump in and add on to that. What Somya said is really critical: making sure that you are testing your systems so that you're not creating discriminatory outcomes. That's coming up more and more in some of the legislation coming down the line, and even when it's not required, you want to make sure it's something you're thinking about doing voluntarily, because it's the right thing to do. Going beyond the legal have-tos, one "should" I would add is that it's really important to build in regular checkpoints so that, as Somya said, you don't set it and forget it, and so the way you're using the tool still matches the kind of company you say you are. Don't just ask, "Is this compliant?" Ask things like, "Does our overall process still feel fair and human to candidates and employees, or is it feeling cold and automated? Are we being transparent about how we're using AI in our processes, in the same way we say we value transparency as a company?" You can be technically compliant and still undermine your culture if what you're doing leads to less transparency, less human connection, or less ownership. I would also add that this is not an issue that is exclusive to using an AI tool.
In any kind of recruitment process, you want to be watching for bias and discriminatory practices or potential outcomes.
Moderator: Great. Kyle, I think this question is well suited for you. What uses of AI could put an employer at risk?
Kyle: Ooh. I think the obvious one is acting on bad information in a way that leads to non-compliance or other undesirable outcomes. But I'm going to use this time to highlight a risk that maybe isn't discussed quite as much, although I know Susan talked a bit about it earlier, and that's the risk of employees using AI in such a way that they end up less knowledgeable or skilled than they otherwise would have been. The workplace, like the classroom, is a learning environment where we gain knowledge and develop skills. And while AI can be an aid to learning, it can also be used to bypass it. We see this in the classroom when students turn in papers they didn't write or even review. Maybe they get away with that cheating, but they leave school less knowledgeable than they would have been had they done the work themselves. Unfortunately, as one of our questioners brought up earlier, this kind of thing also happens in the workplace. Employees effectively turn in work they didn't actually do themselves, and in the end they're poorer for it, because they skipped the hard mental problem-solving that would have developed them as employees, and they might not even understand the work they've turned in. That's not good for them, and it's not good for the organization. So if you don't know whether or how your employees are using AI to do their jobs, I would absolutely inquire about that. Then be explicit about your expectations: what are legitimate uses of AI in your organization, and what aren't? What do you allow and what do you not? And be consistent about how you follow and execute your policies.
Moderator: Great. A question cutting through the chat: sometimes the process feels very non-human even when humans are involved. Do you think AI can make it even more human?
Bethany: I can grab that. I think there's real opportunity to make it more human. We've talked a lot about having that human in the loop, and one thing I think is really important is that we use AI for the things it's really good for and humans for the things humans are really good for: decision-making and critical thinking. When you inject AI into the process, there's even more opportunity for AI to pick up the things that are rote or don't require critical thinking, leaving space for humans to lean into the things that actually make them more human, so that the process feels more human to employees and candidates.
Somya: And I want to add: there's a trend of creating job posts with AI, right? You can make one in about four seconds now. (laughs) And don't forget, on the other side, candidates see the job post, put it into AI, and say, "Rewrite my resume for this job." So when your AI tool scans all the incoming resumes, it's probably scanning for certain buzzwords and for matches to the job description, and that's not going to help the whole process.
You're probably going to get a lot more matches than you would have if both parties weren't using AI to match up. It's an interesting place, because the whole thing almost negates itself, and now you need human skills. I've seen a bit of a trend of cover letters coming back as a way to get to know someone and see their writing style, but even that is probably going to be hard now; you can't reliably get the personality and experience of the person through the writing. So even more, your interview process and the calls you do with candidates are really where you'll hone in on the candidate's true experience, and also the culture, the fit, the personality, all of that. Those are the soft skills, the human skills. It's actually becoming even more important.
Moderator: Yeah. And your example brings up a really good point about the challenges we face using AI in HR. Perhaps all of you could touch on some of those challenges, going a bit deeper, with some examples of how to overcome them.
Kyle: I can start. As far as challenges, two come to mind, and they're related. First, there are so many ways of using AI that it can feel overwhelming. I know I feel overwhelmed. Even a single tool like a large language model can seem to have unlimited use cases. You're presented with this brand new tool and it says, in effect, "I can do all sorts of things," but where should you start? That's one challenge. Another is that this technology is evolving so rapidly that the results from using it can differ from one day to the next. Sometimes I feel like I'm in a constant state of experimentation, like I'm in the back seat of the car yelling up to AI, "Are we there yet?" And yes, we are making progress, but we also have more ways to go.
When it comes to overcoming these challenges, while it might sound counterintuitive, I've found it helpful to think small and focus on what I can quickly test and measure. I've tended to have more success when I'm asking AI to do something with very clear parameters. The narrower my instructions to AI, the easier it is for me to evaluate the results. For instance, if you ask AI to edit an email and look for every possible issue, you're probably not going to be able to tell what it's catching and what it's missing. But if you ask it to review for a specific set of issues, then you've got clear criteria for assessing how well it performed. That's a little of how I've navigated those two challenges.
Moderator: Great, I love that. Oh, sorry, go right ahead, Bethany, please.
Bethany: I was going to share that one of the challenges we've seen was actually adoption and trust within our own team of HR professionals. HR people tend to be naturally critical of inaccuracy, and there's an understandable worry that AI is coming for your job, right?
Put those together, and a lot of people's first instinct is to avoid using it. So we tackled that in a couple of ways. First, we encouraged people to use AI in fun, low-risk ways to get comfortable with it. Then we built a custom GPT specifically designed to help us speed up drafting responses to HR questions, and we were really explicit in telling our team how we built it, what the guardrails were, what it could and couldn't do, and how they should and shouldn't use it. We framed it as an accelerator, not a replacement, and reinforced that being the human in the loop was not optional; that was really the whole point. We kept reinforcing that their judgment and discretion were required, and that the tool was just there to help with speed, consistency, tone, things like that. So, back to what Kyle mentioned earlier, the key for us was being really intentional about using AI in a way that made sense for us, where we could go deep, rather than trying to throw AI at everything.
Somya: 100%. I think the cultural piece is a really important one as well, beyond just the HR team. Having worked with two companies now in setting up their AI programs, I've learned a lot about that inflection point. We started off with a very strict policy where we wanted all the use cases across the organization to go through legal, and we discovered a funneling effect. In fact, we all know folks are going to use the tools regardless of what your policy says, so you might as well learn from your people, take their feedback, open the gates a little, and give them guidelines instead. One of our challenges, and it's a good challenge rather than a bad one, was understanding the culture of your own organization and how to work with it so that everyone feels comfortable with the adoption. And then, from the product side, really understanding what value we want our consumers to get from the AI tools we're using, instead of just adding AI to every product, because the race is on, right? Those are two challenges, and they require balancing the need to move fast with being strategic.
Moderator: Extremely well said. Definitely, on that last point about putting AI into absolutely everything: you need to listen to your customers and what they're asking for to really meet them where their needs are. That was an extremely fast 30 minutes, and there were so many more questions we could have gotten to, so I really appreciate your time. Before we sign off, I'll give everybody one last moment. Any final thoughts you want to share with the group around AI?
Bethany: I can jump in. One parting thought that often comes to mind for me: we talk a lot about the technology and the parameters around it, and I just really encourage people to prepare their workforce, not just their technology. Make sure you're giving your employees clear, simple guidance around where you want them to try AI, where it's off limits, how to handle sensitive data, and what good use looks like. Provide them with actual training on how to get the most out of those AI tools.
Kyle: I would add to what Bethany says that, particularly when you're first getting started, the learning curve is really high, so you need to set aside time for people to experiment, to figure out what works and what doesn't, and then to figure out how they're going to incorporate AI into actual workflows. All of that takes time, and a kind of permission to fail, to not necessarily see the results you want right off the bat, but to be persistent and patient throughout the process.
Somya: I'll add that we all know AI is a little overwhelming, to say the least. Starting small, with a framework that works for your organization, is more important than dashing out into the market or having every division and business unit in the org maintain its own separate AI policy. Really come together, build a central understanding and a place where you can share knowledge. We've put together a framework ourselves, and it's been working; it's about how we bring the organization together through the AI committee, and it has really helped unify the tools we want to use across all the business units and given us a good shared understanding of the product roadmap. It feels unified instead of piecemeal. It feels like we can actually tackle it because we're all on the same page, and it has been a good way to quell some of the fears and anxieties about needing to move with speed without having all the information.
Moderator: Wonderful. Somya, Bethany, and Kyle, thank you so much. This was excellent, and we really appreciate your time.
(instrumental music)