Intro: [00:00:00.96] Welcome to the Workology Podcast, a podcast for the disruptive workplace leader. Join host Jessica Miller-Merrell, founder of Workology.com, as she sits down and gets to the bottom of trends, tools and case studies for the business leader, HR and recruiting professional who is tired of the status quo. Now, here’s Jessica with this episode of Workology.
Jessica Miller-Merrell: [00:00:26.19] This Workology Podcast is sponsored by Ace the HR Exam and Upskill HR. This episode of the Workology Podcast is part of our Future of Work series powered by PEAT, the Partnership on Employment and Accessible Technology. PEAT works to start conversations around how emerging workplace technology trends are impacting people with disabilities. Today, I’m joined by Merve Hickok. She’s the founder of AIEthicist.org and a business process analyst at High Sierra Industries. Merve is an independent consultant, lecturer and speaker on AI ethics and bias and their implications for individuals, organizations and society. She’s also a senior researcher at the Center for AI and Digital Policy and has over 15 years of global senior-level experience with a particular focus on HR technologies, recruitment, and diversity and inclusion. She is a SHRM senior certified professional and a certified HIPAA security expert. Merve, welcome to the Workology Podcast.
Merve Hickok: [00:01:27.51] Thanks so much, Jessica.
Jessica Miller-Merrell: [00:01:28.92] I feel like you’re one of the few people that I’ve ever met that has experience in HR but is also really knowledgeable and experienced in artificial intelligence. I’m really excited for our conversation today.
Merve Hickok: [00:01:42.90] Likewise, I’m really excited to be here today. And you’re right, this is still a very small field. So, I feel like I know just the handful of people that are interested in or experienced on both sides as well.
Jessica Miller-Merrell: [00:01:57.24] Well, AI is touching so much of what we do in human resources and really the human capital space. I wanted to ask you a little bit more about your background. How did you get involved in the AI work and digital policy?
Merve Hickok: [00:02:11.61] A bit of background. I was working for Merrill Lynch in Turkey. I was Country HR Manager after the acquisition by Bank of America, so I worked a lot on some of the technology implementations and platform changes. That was a very generalist role, but I had insight into a lot of the technologies. The company then asked me to move to London to take over a brand-new role, Diversity Recruitment Manager for our graduate hires. So I had to build and execute the strategy on diversity recruitment across the colleges in Europe, the Middle East and Africa. That role also required me to be the admin for our recruitment technology, our ATS at the time. Going around these colleges, talking to students, trying to build these partnerships for a more diverse future workforce, I was hearing a lot about the obstacles they were running into. In addition to being female or being a minority, people like students with disabilities were interested in the industry, but they were running into issues with some of these technologies and the practices that we had in place back then, and not only at the bank, but at our competitors in general as well.
Merve Hickok: [00:03:38.94] I started realizing that there had to be a more thoughtful approach to this. And I’m a technology optimist, I love technology. So, when working with these technologies and with the students, I started going deeper into AI. At the time there was maybe a handful, if even that, of HR and recruitment technologies using AI, so I started looking into those, and looking more widely at AI and bias issues and the impact on society and social justice. One thing led to another. Like you mentioned, I got SHRM certified, then I came to the U.S., and now I’m wearing multiple hats: I run AIEthicist.org, and I’m a business process and management analyst at High Sierra Industries, a 40-plus-year-old nonprofit in Nevada developing and delivering learning systems for people with disabilities. I’m also involved with the Center for AI and Digital Policy as a senior researcher. So everything has been coming together, and I was able to interact with a lot of people with diverse perspectives and diverse backgrounds. That was the journey.
Jessica Miller-Merrell: [00:04:57.90] I love hearing your background and experience. And you’re absolutely correct, there are so many use cases for artificial intelligence, and I feel like it’s more and more part of our conversation as HR leaders. In so many HR technology demos or briefings that I’m sitting in on, they’re talking about AI, but it’s really a gray, fuzzy area for a lot of HR pros. I want to dive right into this topic and ask you: as HR pros are having these kinds of conversations with HR technology vendors, what should we be concerned about when it comes to people with disabilities and using AI technology in the workplace?
Merve Hickok: [00:05:38.78] You’re absolutely right. I see new products coming out every single day, and each one looks like the next shiny thing. But we really need to be careful about what these technologies are actually doing, and above everything we need to understand the logic and science behind them. Not all AI technologies in the recruitment or HR space are bad; there are some really good examples out there, so I don’t want to generalize. But there are also some that are based on pseudoscience, on really flawed science, or on science and technology that needs to develop further to actually work well. What does it mean that they can predict a person’s success in a role by looking at their facial features or analyzing their sentiments? How would you feel if someone made a decision about your character and future success just by analyzing the way you look or the tone or pitch of your voice? A lot of these companies are out in the market promising employers a better, faster, cheaper process, and I think a lot of employers are jumping on the bandwagon without understanding what is actually behind it. But these technologies come with a definition of what is normal, what is acceptable, what is worthy. This goes for all candidates and employees, not only for people with disabilities. Right? So it’s impacting all of us.
Merve Hickok: [00:07:09.50] But there is an additional burden on people with disabilities. We forget that there are now more than one billion people with some kind of disability in the world, 15 percent of the world’s population. Yet a lot of these technologies are made by non-diverse teams. They still try to fit people into their own assumptions, their own norms of what is normal and acceptable, and continue to refuse to see how each individual can contribute to the workplace. So that’s one of my focus areas: trying to help HR professionals understand this and the possible issues, and also looking at it from a fairness and equity perspective. Now, you asked what should concern us. First of all, we know that recognition and analysis models don’t work accurately for certain groups. We have seen a number of studies by government agencies and by different researchers and scholars showing that facial recognition performs significantly worse for those with darker skin, Asians, women and those with disabilities. If you’re not represented in the data set powering those models, you’re not even accurately recognized as a person, let alone given a proper assessment. And it gets even worse when you’re at the intersection of these groups.
Merve Hickok: [00:08:37.22] Say you’re a woman with darker skin or an Asian person with a disability, for example, so you fall into several of these groups. And facial analysis, as distinct from the facial recognition I just mentioned, is itself essentially pseudoscience. I could talk for days about some of these technologies. What else should concern us? They use natural language processing, so they analyze what you write and what you say, the language and the text behind it. But again, they’re not able to capture it correctly if you’re speaking with an accent or if you have a speech impediment, et cetera. They’re also not yet good, and I don’t know if they ever will be, at understanding the context of what is said, or the nuances or analogies that you might use in an interview. And then there is the emotion analysis software that claims to analyze your face to predict job success, whereas we know that there is no universal way to express our emotions, and that we might be feeling one way while our face says something else. So with a lot of this technology, when you break it down into smaller pieces, you see that the assumptions behind it are really problematic. They also try to extrapolate from the samples they have to the wider community of people with disabilities, so even if you had some people with disabilities in your dataset, that’s not reflective of the range of possibilities.
Merve Hickok: [00:10:10.83] We know for a fact that some disabilities manifest very differently from person to person. And if you have time, I would like to come back to this: it’s about customizing the models according to your company. A lot of these tools suggest to employers that they can customize their models according to your current employee population. So they ask: who do you define as successful employees, and what features would you like to highlight and optimize? But we know that in a lot of companies, people with disabilities are not equally represented to start with. And the way that, unfortunately, a lot of employers measure success, longevity in a job, no breaks between jobs, promotions, et cetera, might not really translate to the realities of people who might have to take time off, say, for medical reasons. So what happens is, when you don’t fit those norms to start with, you’re considered an outlier, an error in the system, and you’re constantly being subjected to these technologies, which still have serious shortcomings. Those are the things that really concern me when it comes to AI technology in the workplace in general, but especially for people with disabilities.
Jessica Miller-Merrell: [00:11:31.08] It’s quite the exhaustive list. You mentioned a lot of different types of tech, and I feel like a lot of it was recruiting based: we have job matching, we have video interviewing. But what about other types of technologies in other parts of HR or the workplace that are using artificial intelligence, ones we maybe haven’t talked about? What would you call out to make us aware that AI is being leveraged in that tech?
Merve Hickok: [00:11:58.02] Yeah, absolutely. For me, the most troublesome uses, other than recruitment, are social media background checks and employee surveillance tools. Social media background checks are illegal in some states and some countries, but legal in others. What happens is an employer can run a social media background check on current employees, as well as on candidates if they want, and receive scores from these tools, which are also based on text and sentiment analysis. Not only are you crossing a boundary with your employees, peeking into their private life outside of the workspace, you’re also trusting these tools to be accurate assessors of a person’s tendencies, whether political or social. You might also find things out about your employee that you wouldn’t otherwise know, which might impact your judgment of that employee. So for me, that’s a case of actively going after your employees to get more information. The other piece is employee surveillance. We’ve started to see these practices either in the workplace or for those working from home. These might be cameras that monitor your every move at work; tools that monitor your emails or chats and do sentiment analysis on your interactions; keystroke loggers or screen capture technology that record what you’re doing, and when, while you’re using a company device. It might even be a training platform that logs when you start and finish a training and how many times you interact with it. Or it might be a video conferencing tool.
Merve Hickok: [00:13:56.26] The latest was a video conference tool that was analyzing your face during meetings and trying to gauge your engagement. What’s so problematic about this for me is, one, like I said, you’re not respecting the employee’s right to privacy, which became even more of an issue during the pandemic. We now have managers as part of our homes, interacting with our home environment and family. We are part of our children’s classes, the teachers are being surveilled, and we are shaping our behaviors because of these technologies. You know someone is watching you and you don’t have any control; there’s a power imbalance between you and your employer, so you can’t really fight against it. The next best thing is shaping your behavior around it. You also tear down your trust relationship with your employees when you start surveilling them and collecting data. Again, we don’t question the science or relevancy or ethics of these tools; we just question the employee who’s working to move the company forward. Once you start datafying this behavior, these interactions between employees, you reduce employees to just data points. What happens then is your culture turns from a cooperative one into a competitive one. You also risk the system being gamed: your employees, instead of working towards the goals of the organization and having teamwork, start to try to game the system to protect themselves. So we really need to question why we’re using these technologies, what the end result is, and how we are shaping behaviors. Those two are, for me, some of the most problematic ones.
Break: [00:15:46.96] Let’s take a reset. This is Jessica Miller-Merrell, and you are listening to the Workology Podcast sponsored by Ace the HR Exam and Upskill HR. Today we’re talking with Merve Hickok about ethics and bias in AI technology. This podcast is part of our Future of Work series with PEAT, the Partnership on Employment and Accessible Technology.
Break: [00:16:07.24] The Workology Podcast Future of Work series is supported by PEAT, the Partnership on Employment and Accessible Technology. PEAT’s initiative is to foster collaboration and action around accessible technology in the workplace. PEAT is funded by the U.S. Department of Labor’s Office of Disability Employment Policy, ODEP. Learn more about PEAT at Peatworks.org. That’s Peatworks.org.
Jessica Miller-Merrell: [00:16:36.09] This has been so helpful for me, and I’m thinking of all the people listening now who are thinking about the different types of workplace technologies they’re leveraging beyond recruitment, like the social media background checks you mentioned. I think most of us are doing employee surveys. And, you know, the world has shifted so quickly for us over the last 14 months or so. I wanted to ask you how we should be talking to our HR technology vendors, maybe those we’re already working with or considering working with, when implementing new artificial intelligence technology for hiring and human resources. What questions should we ask, and how do we make sense of whether their actions, activities or technology are ethical?
Merve Hickok: [00:17:24.51] Thank you for that question, because I do this a lot with the companies I consult with. This is where proper due diligence comes into play. First of all, we need to remember that, as employers, we still carry the risk even though we might have outsourced the process to a technology. When you are onboarding these tools, it’s not the vendor that is going to be in trouble if the tool you selected disparately impacts certain groups or has been built to make decisions based on protected classes, for example. You as the employer still carry the liability. That’s why I cannot stress enough that employers, as clients, have to be really diligent and ask the right questions. In terms of what questions to ask, I would say, first things first, ask the vendor to explain the AI model or the decision-making process to you. Is it a black box where even they don’t understand how the technology works? Or is it an explainable model where they can say: these are the features we use to make a decision, the model is based on these criteria, this is how the data flows, and this is the kind of data we use? Ask them to walk you through that process, and don’t take “oh, it’s IP, it’s protected information” for an answer. It’s not a trade secret. You need to feel comfortable that the vendor themselves can actually explain their model.
Merve Hickok: [00:19:00.39] Second, going back to what I said earlier: is it pseudoscience or not? Why would you use something that has no science behind it, or flawed science behind it? Another question is, are the outcomes similar across different groups, and how does the vendor ensure that they are? Are, say, white males getting better outcomes when they’re subject to this model than, say, a woman with a disability? Look at what the outcomes are and how they are spread across different groups. Ask the vendor what kind of quality assurance, mechanisms and safeguards they have: how do they ensure their model is robust? I’m pretty involved in building an audit framework for these recruitment technologies, so I would also ask: are they being reviewed? Do they have third-party reviewers coming in and doing these checks, et cetera? I could walk through these questions for a whole day. But the bottom line is, employers, like I said, need to remember that they carry the risk, not the vendor. So if you don’t have the internal capacity to ask the right questions, get external support for your procurement project and have a trusted partner to help you through that process. It’s not worth taking the huge risk of possibly alienating your candidates and contributing to injustices in society as well.
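The "are outcomes similar across different groups" check Merve describes can be made concrete. One common yardstick in U.S. employment practice is the EEOC's four-fifths rule: a group's selection rate below 80% of the highest group's rate is a conventional red flag for adverse impact. The sketch below is a minimal illustration of that arithmetic; the group labels and counts are hypothetical, and a real audit would go well beyond this.

```python
# Minimal sketch of an adverse-impact check across groups using the
# four-fifths rule. All group names and numbers are hypothetical.

def adverse_impact_ratios(outcomes):
    """outcomes maps group -> (selected, applicants).
    Returns each group's selection rate divided by the highest
    group's selection rate; a ratio under 0.8 is a red flag."""
    rates = {g: sel / apps for g, (sel, apps) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical results from a screening tool:
outcomes = {
    "group_a": (50, 100),  # 50% selection rate
    "group_b": (30, 100),  # 30% selection rate
}
ratios = adverse_impact_ratios(outcomes)
flags = {g: r < 0.8 for g, r in ratios.items()}
print(ratios)  # group_b's ratio is 0.3 / 0.5 = 0.6, below 0.8
print(flags)
```

This kind of simple spread-of-outcomes summary is a starting question to put to a vendor, not a substitute for the third-party audits Merve mentions.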
Jessica Miller-Merrell: [00:20:43.49] Thank you for that list. I think an audit is incredibly important. One of the other areas I wanted to ask you about: not only should we be talking to the HR technology companies, potential vendors and current vendors, but what about educating and broaching these subjects, the dangers of artificial intelligence technology, with our company leaders outside of HR? How do we have those conversations? What do you recommend?
Merve Hickok: [00:21:14.31] Oh, the million-dollar question. There is a direct correlation between HR practices and the bottom line, right? Whether that’s in the shape of the cost of bad hires, alienated candidates, who, by the way, in certain industries might also be your consumers, so you’re alienating a candidate as well as a consumer, or upcoming laws and penalties, et cetera. First, as HR practitioners, we need to articulate that connection well and understand our risks as well as the benefits. We always talk about HR being a business partner. Whether the interest in an AI technology is initiated by HR or by the business, HR needs to see the bigger picture and how it’s impacting the organization and its culture. Are you bringing in less diverse people? Are you amplifying biases? Are you getting the best candidates with this technology? Not all AI technology is bad. What I’m warning against is treating AI as an all-knowing oracle that cannot be wrong. At the end of the day, it uses data that is created by humans, about humans, and it is coded by humans. These tools usually promise that they are going to make your hiring better, faster and cheaper. They can certainly deliver on the faster and cheaper side, but when we’re broaching the subject with our company leaders, they need to understand that these tools might not always be getting them better candidates, and that they are possibly making a tradeoff, amplifying biases or possibly discriminating against candidates, and they need to decide accordingly. Is that a risk you want to take? How is this going to impact your workplace culture? Do the outcomes align with your company’s values? So it’s not just about recruitment, as if this were a single technology sitting by itself, independent of the company. It impacts the core of the company. And I think touching on those points of risks, benefits, value alignment and long-term impact is the crucial thing.
Jessica Miller-Merrell: [00:23:28.95] Awesome. Such good, important points. The other thing I was thinking of as you were talking was training. Where can HR professionals go to educate themselves and their teams about ethical artificial intelligence and its potential for bias? Do you have any recommendations on where they should go for training, learning, reading, growing, any of those?
Merve Hickok: [00:23:55.45] Absolutely. The first thing I would say is: don’t be intimidated by it. When we say AI, a lot of people think, that’s technology, I’m not a computer scientist or an engineer. I’m not a computer scientist or an engineer either. You don’t need to code or build AI models yourself to ask the right questions and understand the implications. These technologies impact you on a daily basis; it’s not only about HR. So you really need to understand the impact of AI in general and what that means for you and for your family as well. Training and understanding this is really crucial. There are now a few online trainings on this topic that are geared towards non-technical audiences. I would say definitely follow some of the names in the field who are discussing these issues to get an initial understanding. If you have time, join some advocacy groups working on these issues. There are groups working on or discussing AI and bias generally, and there are smaller groups working on HR and bias, like the ones I’m involved in. I’ll also do a shameless plug here: I have a whole website, AIethicist.org, built for those interested in these topics who don’t know where to start yet. That was one of my frustrations when I started getting into AI and bias, that I didn’t know where to start and I was going down these rabbit holes. There weren’t any sites that would help. So I curated a number of papers that are rather non-technical that will help you start understanding some of these issues and debates, and I constantly update it. I also have a self-paced online training on AI and bias and ethical decision making. But at the end of the day, don’t just look at this for HR. It impacts you and your family 24/7, and it’s crucial that we understand these technologies.
Jessica Miller-Merrell: [00:26:15.95] We will make sure to include in the show notes your website, which is AIethicist.org, correct?
Merve Hickok: [00:26:23.93] Right. Yes.
Jessica Miller-Merrell: [00:26:25.19] AIethicist.org, we’ll include that and also the self-paced learning. I feel like now is the time to educate yourself. I encourage HR leaders to become subject matter experts. You don’t have to know it all, but be strong, confident and knowledgeable in this area, because you’re not only serving HR, you can be a point of contact for the entire organization as others have questions about artificial intelligence. And Merve has some really great resources that I encourage you to check out. One other question I wanted to make sure we asked: what does a healthy balance of ethical policies and artificial intelligence look like for HR? Do you think we should have a written policy in our employee handbooks and on our website that talks about the ethics around our use of artificial intelligence? Are we going in that direction? I would love to hear your thoughts.
Merve Hickok: [00:27:30.28] Jessica, for me, it’s always about walking the talk. You might have elegantly written policies, but the bottom line is, are you actually practicing them? You might have a policy about non-discrimination, but have you implemented systems that produce discriminatory results because you haven’t done the due diligence first? How is your hiring contributing to the company’s culture and composition, like we mentioned? It’s really about vetting those policies against actual practices. And it’s not only about recruitment. What are the other policies and practices that might result in either a growth environment or a toxic work culture, like your compensation, promotion and development opportunities? I always say you can use AI in a very positive way to understand your company first. Start with that. Use your data to understand: are there any wage gaps? What kind of people are being promoted? What kind of people are being given development opportunities? What is the composition of your company and your applicants versus those who are exiting the company, and what are the reasons for that? Try to understand that first, before you try to fix something or put something on top of what might already be a problematic issue. We’ve seen a number of big tech companies establish their ethical AI policies, it’s on their website, and then you turn around and they are constantly collecting consumer data and platform data, using it to manipulate those people, selling their data. So just having a policy is not enough. You really have to show that you are actually practicing it.
Jessica Miller-Merrell: [00:29:38.14] I love that. It really gets back to the training portion, educating yourself, educating your team, educating the organization and walking the talk and then following through with the policy and maybe the guidelines and the processes that you’ve put in place.
Merve Hickok: [00:29:55.78] Absolutely. As an HR practitioner, you touch on a great point: be that person that people in your company can come to with these questions. Take that lead. But also, this is important for you and your family. How is your kid being impacted at school? Are they being surveilled? How are their grades assessed when they apply to a school or college? How are you being assessed when you apply for credit? The news that you see. It’s you as a consumer, you as a citizen, you as a parent, you as you, not only you as an HR practitioner. There’s an impact of AI now, like I said, 24/7. Even when you’re sleeping, your Fitbit might be collecting information about you that it shares with insurance companies that make decisions about you. Understand those consequences and advocate for a better world. Imagine a better world and advocate for it.
Jessica Miller-Merrell: [00:31:01.72] Well, Merve, thank you so much for taking the time to talk with us today. I’ve learned a lot. We have some really great resources that we’re going to share in the show notes. Where can people go to connect with you and learn more about the work that you’re doing?
Merve Hickok: [00:31:16.63] I’m very active on LinkedIn, so if you want to connect with me on LinkedIn, Merve Hickok, more than happy. A lot of the work that I do, I publish there as well. If you want to listen to any of my previous work or read any of my previous work, all of those are also included on AIethicist.org. Happy to connect with, especially with professionals in this field.
Jessica Miller-Merrell: [00:31:43.15] Thank you so much. I love that you’re one of the few, and hopefully soon many more, HR professionals who are experts in this area. You have such an understanding, I think, that you can really speak to the work that we do every single day as leaders of our organizations working in HR. So, thank you again.
Merve Hickok: [00:32:06.79] Thank you so much, Jessica. Thank you for the opportunity.
Closing: [00:32:10.12] Personal and professional development is essential for successful HR leaders. Join Upskill HR to access live training, community, and over a hundred on demand courses for the dynamic leader. HR recert credits available. Visit UpskillHR.com for more.
Closing: [00:32:25.99] Technology can be a bridge or it can be a fence. Artificial intelligence has come a long way in the past decade, and we see it everywhere: on our career sites with chatbots, in automated emails from our ATS, in candidate matching and candidate assessment tools. This AI is the grocery store self-checkout, but for HR. As much as we want to implement this new tech, and I love me some tech, it saves time in hiring and recruiting, I want to caution you: we have to pause to consider what impact that technology will have on our workplaces, on our employees, and on people with disabilities. I really appreciate Merve’s insights and expertise on this special podcast episode for PEAT as part of our Future of Work series. Thank you to Merve. Thank you to PEAT. I hope you enjoyed it.