Future of Work Podcast, Episode 24.
Alexandra Reeve Givens, Founder and Executive Director of the Institute for Technology, Law and Policy at Georgetown University, discusses how employers need to be aware of both the benefits and the potential liabilities of using AI in hiring, particularly in the recruitment and interviewing process for people with disabilities.
This podcast is developed in partnership with Workology.com as part of PEAT’s Future of Work series, which works to start conversations around how emerging workplace technology trends are impacting people with disabilities.
Transcript
Intro: [00:00:00.3] Welcome to the Workology Podcast, a podcast for the disruptive workplace leader. Join host Jessica Miller-Merrell, founder of Workology.com, as she sits down and gets to the bottom of trends, tools and case studies for the business leader, HR and recruiting professional, who is tired of the status quo. Now here’s Jessica with this episode of Workology.
Jessica Miller-Merrell: [00:00:25.47] As the adoption of artificial intelligence in our workplaces increases, so do the conversations around AI and discrimination. Probably the best-known case study right now is from Amazon, which developed its own artificial intelligence candidate-matching and sourcing tool for technical positions. The company eventually abandoned this tech after determining that it favored candidates who had male names, had attended predominantly male universities or colleges, and had specific work experiences. Over the last several years, as the adoption of AI in HR and recruiting has increased, I've been doing my own research on how this technology is transforming our workplaces, from hiring to training to other employee engagement activities. This episode of the Workology podcast is part of our Future of Work series powered by PEAT, the Partnership on Employment and Accessible Technology. In honor of the upcoming 30th anniversary of the Americans with Disabilities Act this July, we're investigating what the next 30 years will look like for people with disabilities at work and the potential of emerging technologies to make workplaces more inclusive and accessible. Today I'm joined by Alexandra Reeve Givens. Alexandra is the founding Executive Director of the Institute for Technology, Law and Policy at Georgetown Law, which operates as a think tank working on cutting-edge issues in technology law. She previously served as chief counsel for IP and antitrust to the Senate Committee on the Judiciary and its chairman and ranking member, Senator Patrick Leahy. She began her legal career as a litigator at Cravath, Swaine and Moore in New York City. Alexandra is also the daughter of the actor Christopher Reeve, who experienced a life-changing spinal cord injury in the 1990s. She serves as the Vice Chair of the Reeve Foundation, which supports people living with all forms of paralysis. Alex, welcome to the Workology podcast.
Alexandra Reeve Givens: [00:02:31.56] Thank you for having me.
Jessica Miller-Merrell: That was a lot to cover in a short introduction. Tell us a little bit more about you.
Alexandra Reeve Givens: [00:02:40.2] Sure. So I'm a lawyer by training, and I am deeply focused on law and policy as tools to protect individuals' rights and to effect social change. In particular, I focus on and care a lot about technology policy, really because of the dramatic impact that technology is having on society today, whether that's the way we access information, public trust in institutions, or how technology is affecting workers, consumers and historically marginalized communities. Obviously, as you mentioned in my bio, disability is a personally important issue to me because my father, the actor Christopher Reeve, experienced a spinal cord injury when I was very young. We lost him in 2004. But I serve as the Vice Chair of the Reeve Foundation and have continued a lot of his work investing in research, support, advocacy and building community for the almost 6 million people living with paralysis in the U.S. So that's a deep driver of my work as well.
Jessica Miller-Merrell: [00:03:42.45] Well, I’m so grateful to have you on this podcast and I can’t wait for everyone to hear and dive into more on this topic. So let’s talk about the pillars of work and the focus at the Institute for Technology Law and Policy at Georgetown Law. Walk us through that.
Alexandra Reeve Givens: [00:04:00.06] Sure. As you say that, I realize what a mouthful the name of this organization is. At the Tech Institute, we're focused on key issues in how technology is transforming society. We have a whole bunch of different projects and fellows working on these questions, but several key themes stand out. One is advancing civil rights and digital personhood, which looks at how technology is affecting people, particularly from marginalized communities. We also think about promoting digital equity and access: how do you create opportunities in terms of who can access and benefit from technology? Then we do a lot of work on improving how regulators, the courts, lawyers and policymakers actually understand technology and the role that it's having in society. One of the core issues I care about in all of this is expanding who has a seat at the table in conversations about how technology should be used, regulated and deployed. In the world today, we just can't have people stepping back from the conversation because they aren't technologists. I was a history major in college. I don't think of myself as a technologist at all. But there's a real risk if people step back from those conversations because they feel like they don't have the technical chops, when in reality technology is affecting so many different parts of our lives that we need it to be a broad and inclusive conversation. So I spend a lot of time thinking about how to bridge the gap between technologists and policymakers to facilitate smart and thoughtful decision making.
Jessica Miller-Merrell: [00:05:37.01] Technology is such an important part of everyone's lives, and we're talking over Zoom technology. I have my computer, my laptop, three screens and another computer on the side, in addition to all the podcast technology.
Jessica Miller-Merrell: [00:05:49.75] Without it... I don't know, we couldn't live without it, I think. So I love that we're tackling this topic today. Talk to me about hiring and employment as it relates to the use and application of artificial intelligence when it comes to people with disabilities.
Alexandra Reeve Givens: [00:06:09.04] So in recent years, we've seen a huge surge in the use of AI in the hiring process, both with established companies and with new companies that are selling the promise of this technology. Part of that comes because hiring platforms like LinkedIn or Indeed are making it so much easier to apply for jobs. So employers are getting a lot more candidates for every position, and they're looking for efficient ways to manage that deluge of resumes and candidates. The types of tools we're talking about are things like resume screening tools, which automate the screening process and then send a subset of the resumes through to in-person review. There are tools that purport to do sentiment analysis on recorded video interviews. In those instances, candidates, instead of meeting with a person, record a video interview where the computer is asking them questions and they respond, and then some companies purport to analyze that video, looking at your facial movements and the tone of voice that you use, among other things. There are also tools that have candidates play certain games or puzzles online, and they analyze how people engage with the test as they're doing it.
Alexandra Reeve Givens: [00:07:23.08] And then another category that's important to mention has actually been around for a long time: personality tests. These are surveys that ask a whole range of questions and score individuals on perceived personality attributes like openness or conscientiousness. Now, when you think about that landscape, as a general matter, there is an increasing amount of concern about some of these tests and the risk that they could entrench existing inequities in our society. I think one of the most famous examples of this was a resume screening tool developed by Amazon several years ago, which was trained by analyzing patterns in the resumes of candidates who had applied for jobs at Amazon over the past 10 years. Amazon ran this program for a while, and pretty soon they realized that barely any female candidates were making it past the resume screen. Obviously, that's a red flag. Why is that happening? It turned out that the training set they'd used, the resumes Amazon had received over the prior 10 years, reflected the hugely disproportionate ratio of men who were applying for software development jobs at the company at the time. So the model, because it was learning from that set of resumes, had literally learned to downgrade the word "women's" when it saw it on a resume.
Alexandra Reeve Givens: [00:08:44.81] As in women's chess club. They even found that it was penalizing candidates who had gone to an all-women's college, because it wasn't recognizing those as desirable markers on a resume. So the problem here is that the algorithm is being trained on something to learn what to look for in incoming candidates, but if you train it on your existing employee base, there's a high chance you're going to entrench existing inequalities. We can think about that in a gender context, which is super important, and it's really important to think about it in terms of racial and ethnic disparities as well. But let's pause and think about that in the disability context for a moment. Amazon's training data was struggling simply to have women in the mix, and we make up about 51 percent of the population. So then you have to think about disability and the long history of disabled people being systematically excluded from the workforce. And what you realize is that there are real representation problems in tools that are trained to screen people this way.
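To make the mechanism Alexandra describes concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn and synthetic data; this is not Amazon's system or data) of how a screening model trained on skewed historical hiring decisions can end up penalizing an innocuous token like "women's":

```python
# Illustrative sketch only: synthetic data, not Amazon's actual tool.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical resumes and past outcomes (1 = advanced, 0 = rejected).
# The skew: resumes mentioning "women's" were rarely advanced in the past.
resumes = [
    "software engineer java distributed systems",          # advanced
    "software engineer python machine learning",           # advanced
    "captain university chess club software engineer",     # advanced
    "software engineer c++ embedded systems",              # advanced
    "captain women's chess club software engineer",        # rejected
    "women's college graduate software engineer python",   # rejected
]
outcomes = [1, 1, 1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, outcomes)

# The learned weight on the token "women" (the apostrophe-s is stripped by the
# default tokenizer) comes out negative: the model has absorbed the historical
# pattern, which says nothing about a candidate's actual ability.
idx = vectorizer.vocabulary_["women"]
print("learned weight for 'women':", model.coef_[0][idx])
```

Trained this way, the model simply reproduces whatever correlations exist in the historical decisions, which is exactly the representation problem described above.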
Jessica Miller-Merrell: [00:09:47.12] I appreciate you going into such detail with regard to Amazon, because I find that many folks in HR and the recruiting space are still unaware of the Amazon case study.
Alexandra Reeve Givens: [00:10:01.64] It's just a really useful illustration. And to be very fair, I'm not trying to put Amazon up as a piñata. They learned this, they disclosed it and they fixed the problem. And actually, more credit to them for talking about this openly as a case study of when things go wrong and how we need to learn from our mistakes. So I mention it not in a negative way, but as a really important lesson for all of us to take home and think about as these tools are increasingly used.
Jessica Miller-Merrell: [00:10:29.63] Agreed.
Jessica Miller-Merrell: [00:10:30.47] People with disabilities, as you were mentioning in your earlier example, they make up a large percentage of the population, which is also part of what this Future of Work series is all about, employment and accessible technology to help employ people with disabilities and bring them more into the workplace. Disabilities affect one in four adults in the United States. And when you’re talking, it sounds like you’re saying representation is a barrier and that, simply, tools aren’t being developed or trained with people with disabilities in mind. Is that correct?
Alexandra Reeve Givens: [00:11:05.9] Yeah. So that's exactly right.
Alexandra Reeve Givens: [00:11:07.85] Now, part of the difficulty here is that disability is a complex question. As you said, one in four adults in the U.S. are disabled in some manner. But even though that number is huge, the diversity of disabilities means that the group is actually made up of a lot of different subsets of people. There's a researcher, Jutta Treviranus, who speaks really eloquently about this, basically making the point that even in this big population, it's really a lot of smaller underrepresented populations that each face barriers in their own ways. So if there's an AI-driven tool that's doing sentiment analysis on a video job interview, that may create one barrier for a person with facial paralysis, for example, but a separate barrier for somebody who is blind and therefore not making eye contact with the screen, or for an autistic person whose facial expressions may manifest in a different way. And even within those categories of disability, autism has a ton of different presentations: one autistic person may be affected in one way, but another might be affected in a completely different way. Then you layer on questions of who has darker skin, when facial recognition technology has been shown not to work as well on people with darker skin tones, or who speaks with an accent on top of having some impediment to more traditional speech patterns. That is a lot of diversity to account for, and it's impossible to imagine these systems being trained to fully grapple with that.
Break: [00:12:44.33] Let's take a reset. This is Jessica Miller-Merrell, and you are listening to the Workology podcast. Today we are talking with Alexandra Reeve Givens about artificial intelligence and bias in hiring. This podcast is sponsored by Workology and is part of our Future of Work podcast series in partnership with the Partnership on Employment and Accessible Technology, or PEAT.
Break: [00:13:07.33] The Workology podcast Future of Work series is supported by PEAT, the Partnership on Employment and Accessible Technology. PEAT’s initiative is to foster collaboration and action around accessible technology in the workplace. PEAT is funded by the U.S. Department of Labor’s Office of Disability Employment Policy, ODEP. Learn more about PEAT at Peatworks.org. That’s Peat Works.org.
Jessica Miller-Merrell: [00:13:36.75] I appreciate you sharing all of this, because when I think about my trip last year to Las Vegas for the HR Technology Conference, the buzz was all about AI, artificial intelligence, machine learning, all these things. And what you're talking about is something that I don't think is being discussed enough.
Jessica Miller-Merrell: [00:13:59.22] There are companies in the HR tech space that are leveraging AI, and it is everywhere. I'm beginning to see a number of companies adding algorithmic auditing to help repair and also eliminate bias. I wondered what your opinion was on this. Is that enough to eliminate bias from artificial intelligence?
Alexandra Reeve Givens: [00:14:19.56] It's a crucial question. So auditing is incredibly important, and now there are entire disciplines emerging in academia that look at how we can do auditing, how we do this in a responsible way. Part of the problem I was outlining before is that it's hard to develop these tools in the first place, because if AI learns by observing patterns in training data, it's hard to get sufficient training data to really model for all different types of people. So the answer to that is, if we can't do it up front, then we really do need good auditing techniques to audit these tools for bias and see what's happening on the back end. That's a really important and good instinct. But unfortunately, where we are today, we are not nearly at the level we need to be.
Alexandra Reeve Givens: [00:15:05.66] And to be honest, I actually get very worried when I hear companies making some of the assurances that they're making about how their tools and their platforms are audited for bias. I mean, they're actually going one better: not only have they audited their systems, but because their systems are audited for bias, they actually should be deployed in place of human review, because we know human review is really biased as well, and so these tools are actually better than the alternative. There's no question that human review has significant problems. I am not in the camp of saying that we just need to leave it to people and everything will be fixed, because sadly, that has not proved true over time. But these claims about the platforms being fully audited just don't ring true and really oversimplify the problem. If it's helpful, let me unpack a little bit how people are talking about these issues. When the companies say that they're doing an audit for bias, the way it seems to work, from what I can see from the outside, or at least how they explain it to worried people like me, is that they run an analysis on their models to see if they're having a statistically disparate impact on protected groups. So they'll literally take a set of dummy data and see if women in the group fare less well than men. And if they do so at a significant rate, then the vendor comes in and adjusts the model to try to get better parity.
Alexandra Reeve Givens: [00:16:29.8] The genesis of this practice, or really the rules that are cited to justify it, is the Equal Employment Opportunity Commission's 1978 guidelines on employee selection procedures, which include the idea of a four-fifths rule that creates a presumption, or not, of disparate impact. So basically they are pointing to a piece of the 1978 guidelines from the EEOC that says: run a statistical analysis, and if one group is being screened out, or making it through at a rate of 80 percent or less of the dominant group's rate, then there's a problem and you need to come in and fix the model. Nice concept. But here's the problem. That approach might work for traditional gender categories: I can take a sample set, see who is identified as female and who is identified as male, and compare how the women in the sample set are doing to the men. But pause for a moment and consider how we should think about that for disability.
Alexandra Reeve Givens: [00:17:28.24] So number one, disability status is very rarely disclosed, so we don't even have data on the people we would need to run through such an audit. But even if we did, what level of aggregation of disability should we be looking at? If you just compare how people who identify as disabled in some way do against non-disabled people, that's actually not going to tell you very much, because there are so many different forms of disability that people will be affected in very different ways. Even within a particular type of disability, like neurodivergence, a test may impact one autistic person and not another. So really, there's no way to fully test and show with statistical evidence that certain candidates are being screened out by virtue of their disability. This is a huge problem in this space, and it gets messy explaining it.
Alexandra Reeve Givens: [00:18:19.57] I'm kind of conscious of not being particularly eloquent here, but it's a really important point to drive home: if you're looking just at statistical auditing to try to find or test for bias in these models, that really doesn't work for disability. The data simply isn't there. If you think about it, the same problem really exists for other types of protected identities, too: non-binary gender identities, for example, non-U.S. race and ethnic categories, people who live at the intersection of multiple marginalized identities, like being a disabled person of color. All of those people risk being affected by how a test is functioning, but really escape the broad swaths of statistical auditing that many of the companies are relying on right now.
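As a concrete illustration of the audit approach Alexandra outlines, here is a minimal sketch of the four-fifths (80 percent) check using made-up numbers, and of why the same arithmetic breaks down for disability, where status is rarely disclosed and the population splits into many small, very different subgroups (hypothetical figures throughout; a real EEOC-style analysis involves more than this ratio):

```python
# Hedged sketch: hypothetical counts only, not a real audit.

def selection_rate(passed: int, applied: int) -> float:
    """Fraction of applicants in a group who make it through the screen."""
    return passed / applied if applied else float("nan")

def four_fifths_flag(group_rate: float, reference_rate: float) -> bool:
    """True if a group's selection rate falls below 80% of the reference group's rate."""
    return group_rate < 0.8 * reference_rate

# Works tolerably for large, well-labeled groups.
men_rate = selection_rate(passed=400, applied=1000)    # 0.40
women_rate = selection_rate(passed=280, applied=1000)  # 0.28
print(four_fifths_flag(women_rate, men_rate))  # True: 0.28 < 0.8 * 0.40, flagged

# For disability, status is rarely disclosed, and "disabled" is really many
# small subgroups affected in different ways. With a handful of applicants
# per subgroup, the ratio is statistically meaningless even when computed.
subgroups = {"blind": (1, 3), "autistic": (2, 5), "facial paralysis": (0, 1)}
for name, (passed, applied) in subgroups.items():
    rate = selection_rate(passed, applied)
    print(name, round(rate, 2), four_fifths_flag(rate, men_rate))
```

The point is not the code itself but the structural problem it exposes: the statistical test needs group labels and sample sizes that simply do not exist for most disabled candidates.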
Jessica Miller-Merrell: [00:19:15.17] This is a lot to take in. I will be going back to listen to this several times, because these are things that, again, have not been talked about at many of the HR technology conferences and trainings I've attended, or in many of the books I've read that focus on these areas. What are new or other solutions, ones that exist or maybe don't exist yet, that can help address this or reduce bias in artificial intelligence?
Alexandra Reeve Givens: [00:19:49.4] So the most fundamental change I want to see is an end to these promises that tools are audited and thus bias-free. The vendors are simply selling snake oil if they say that, and I think it's a huge vulnerability for employers. There's no quick fix here. So we really need to make sure that employers understand the complexity of the issues and know to be smart customers who are really pressing to make sure that these tools are being developed for them in appropriate ways.
Alexandra Reeve Givens: [00:20:18.47] The reality is that employers and vendors have to do a lot of hard work to develop meaningful hiring practices. There's really no substitute, whether it's AI-driven, fancy new technology or not, for your obligation to do a meaningful job analysis, for the job analysis to be tailored to the specific needs of your job, and to focus on the real skills required for the position you're trying to fill. We need to get away from job descriptions that list "can you lift a 10 pound weight" as a requirement for a desk job. That's lazy drafting, or lazy job analysis, if the job doesn't actually require that type of lifting and it's just a placeholder to say that this is a desk job, and it creeps into job descriptions all the time. That is the type of impulse we need to fight against, and even more so in this AI world, where often vendors are helping you develop what your testing criteria will look like. You need to really challenge that, to see whether it actually fits the job that you are trying to fill. I'm kind of scared that some of the AI vendors are letting people sidestep that hard work, even if they purport not to, by telling employers, we'll study your successful employees and use them as a proxy to decide the qualification standards for a role.
Alexandra Reeve Givens: [00:21:39.71] In that instance, you're really just relying on correlations or observed trends in the people that you happen to have employed so far. You're not actually doing a causal analysis of the specific functions of the job and the skills that are required. So one thing that I think is really important is for employers to internalize that. And then, stepping over to the regulatory side for a minute, for the EEOC, OFCCP and others to really make that obligation even clearer, so that everybody recognizes it and starts to follow it.
Jessica Miller-Merrell: [00:22:12.08] Let’s talk a little bit about current litigation that’s happening right now when it comes to artificial intelligence and workplace discrimination. Can you walk us through some of that?
Alexandra Reeve Givens: Sure.
Alexandra Reeve Givens: [00:22:25.1] So one area that now has a fairly developed body of case law is whether personality tests are acceptable in hiring. These are increasingly common; there was a Wall Street Journal article about five years ago reporting that 70 percent of U.S. workers were taking them when applying for a role. They've been around for a long time, so the main cases on this are actually from the early 2000s. The main claim that comes up in these cases is that personality tests are essentially a pre-employment medical exam, which is a violation of the ADA. In 2005, the highest-level case on this came down from the Seventh Circuit. It was called Karraker v. Rent-A-Center, and it made clear that tests developed by psychiatrists for psychiatric evaluation clearly cross the line. So that's one end of the spectrum. But then there's still this big gray area of what other things border on that: at what point does something cross over into that territory? A few major retailers, CVS, Target and Best Buy, have entered into settlements with the EEOC to modify tests that they were using. They're confidential settlements, so it's hard to unpack a lot of the details, but that shows that the EEOC is paying attention to this area, and it's something that employers really need to keep an eye out for. When they're talking to vendors, they need to ask: have you thought through this body of law? Is there a risk that this could be perceived as a pre-employment medical exam? Something that's really important to keep in mind is that the EEOC has guidance saying that something is a medical exam if it's likely to elicit information about a disability before an offer of employment is made. That's actually much broader than what many people would assume off the cuff when they think of a medical exam.
Alexandra Reeve Givens: [00:24:24.47] So this is an area that employers have to pay attention to and, again, really be smart about as they think about the potential risks. Then there's a separate area which hasn't been litigated as much yet, but is really important when people think about their legal obligations. And that's whether there's a case under the ADA for hiring tools that tend to screen out people with a disability, and whether some of these AI tools fall into that category. The ADA has really clear language on this, in many ways actually more express and explicit than Title VII, which is obviously the doctrine that is familiar to many hiring managers because of race, ethnicity and gender discrimination. So the question becomes, in that instance, if the test screens out an individual with a disability, the employer then has to show that the test is job-related and consistent with business necessity. That puts the burden on the employer to be able to justify the type of evaluation they're doing and to show, point by point, that the questions they're asking, or the skills and aptitudes they're evaluating, are directly job-related. If you think about the black-box nature of many algorithms, and the fact that many of them are proprietary tools developed by vendors, employers really need to be looking under the hood to see what they're testing their candidates for, so that they can turn around and justify that the test is fully job-related and consistent with business necessity. So that's another really important area of the law that employers need to be paying attention to. There's a final set, which is that one civil society organization recently filed a complaint with the FTC against the company HireVue, which is one of the vendors that does sentiment analysis on recorded video interviews from candidates. The main thrust of that complaint actually focused on a different thing: HireVue's claim that it doesn't use facial recognition technology when at the same time it purports to analyze videos with a focus on facial movements. So really, the organization filing the complaint is trying to push on questions of transparency about HireVue's evaluation methods. That's one of the initial actions a lot of us will be watching, just because it's a first, throwing down the gauntlet in terms of challenging that type of business model.
Alexandra Reeve Givens: [00:26:47.38] But I wouldn't be surprised if we start seeing more challenges in the future. The framing that I think is important is to be conscious of this as a moral issue and also as future-proofing against future legal claims. Employers need to be able to justify the tests that they're using. They need to be able to look under the hood too, because it is they who are going to have to defend themselves, saying the test that we're using is job-related and consistent with business necessity. There's another part of the ADA, another regulation, that says that failing to select and administer employment tests in the most effective manner is also a violation. So there's an affirmative obligation to administer tests that accurately reflect the skills and the aptitude of candidates rather than their impairment, which is the language of the statute. In both of those instances, we see that employers need to know what's going on. They need to have made an informed decision about the hiring tool that they're using in order to protect themselves down the line. There's a really interesting question about whether the responsibility sits with the employer or the vendor, and many vendors are including indemnification agreements to help reassure employers that their tools are appropriate for use. It's fine for contracting parties to enter into contracts; that's a normal business relationship. But I would still counsel employers to be wary of those agreements, because ultimately it is their reputation and their company culture that is on the line, and the moral responsibility is still with them to deliver effective, fair, non-discriminatory hiring practices.
Alexandra Reeve Givens: [00:28:31.78] I would say that the pressure is on the vendors as well, because they have touchpoints with so many different employers around the country, so they also need to be above reproach in terms of counseling good business practices on this question.
Jessica Miller-Merrell: [00:28:43.48] Absolutely. I want to talk a little bit about different state laws, including the Illinois Artificial Intelligence Video Interview Act that went into effect in January of 2020. What are your thoughts on this? How will this impact the workplace and our topic that we’re talking about today?
Alexandra Reeve Givens: [00:29:05.11] Sure. So as it ended up, the Illinois bill is essentially now a notice bill. It's designed to make sure that people know that their video interviews are being recorded and that they may be analyzed. That definitely has some value, right? Better that than surreptitious recording and analysis of a video of you in a personal moment with a future employer. But at the same time, I still have real concerns about its efficacy. One of the key goals of disclosing, of telling somebody that you're recording their video and that you're going to analyze it, needs to be for people to know what they're being evaluated on, so that in the disability context, they know whether they have to ask for a reasonable accommodation. If I'm colorblind, I know that I need to ask for a reasonable accommodation before I engage in a test that requires me to click on all of the red squares in a red and green interface. That's an obvious one; you know the accommodation that you need. But if I'm autistic or have speech aphasia, I may have no idea that those will impact how the AI program scores my recorded video interview. I need to understand what the phrase "this video may be analyzed as part of your review process" actually means, so that I can make an informed, thoughtful decision about whether there's a chance my disability may impact how I do, and then request an accommodation for that.
Alexandra Reeve Givens: [00:30:33.77] For that to happen, it really requires a pretty detailed disclosure about what you, as an employer, are planning to do with the video and what your analysis is actually going to test for. That's a lot more than I imagine most people will feel comfortable going into. There's a creepiness factor when you start really explaining how you're going to analyze things, or start talking about facial movements and patterns of speech. But if we want these disclosures to actually serve the purpose of putting people on notice, so they can know whether there's a chance that their disability may impact their performance, we really need that level of detail. So that's a big area of concern with this. The other thing to think about is the power imbalance in that situation. It is asking a lot of a candidate to ask for a reasonable accommodation. There are some things that are natural: you assume when you go for an in-person job interview that the building is going to be wheelchair accessible, and there's not a lot of emotional baggage in making sure that's the case; they have a legal obligation for it to be accessible. But there's a lot of emotional baggage and powerful weight in disclosing a disability and asking for an accommodation, particularly if it's around something like this that's purporting to engage in sentiment analysis, something that's about your mode of communication. That can be really tricky for people, and employers just have to be aware of that dynamic.
Alexandra Reeve Givens: [00:32:02.59] So it's not only a question of disclosing, this is what we're planning to do with this video, so that people can know whether they'll be impacted, but of communicating that in such a way that people still feel able to actually seek those reasonable accommodations. Then when they do ask for them, it is equally important, and indeed legally mandated, that if the reasonable accommodation means that they're not using this hiring tool, that they're getting an in-person interview or a phone interview with somebody instead, you have to make sure that that track is given equal weight, that the reasonable accommodation is still keeping them in the funnel and allowing them to progress against others in the pool, and that it's not an off-ramp that's actually taking them entirely out of the process. No vendor can fix that for you. Employers have to be the people who manage that process, who take accountable steps to make sure that reasonable accommodations are being offered in the right way, in a thoughtful way, and that those people are being equally valued in the process. And I fear that things like the Illinois law, which just focus on disclosure, only deal with a very small piece of the problem, when really employers have to think about the whole lifecycle that comes with this process and make sure that they're being fully accommodating throughout.
Jessica Miller-Merrell: This year is the 30th anniversary of the Americans with Disabilities Act.
Jessica Miller-Merrell: [00:33:27.01] Does the ADA address any type of AI employment concerns? I'm asking this because I realized that when the Americans with Disabilities Act was enacted, AI really wasn't something that we were thinking about.
Alexandra Reeve Givens: [00:33:41.47] Right. So the ADA didn't talk about AI. It would have been super impressive if it did. But it does have remarkably detailed provisions about employee selection tools and tests.
Alexandra Reeve Givens: [00:33:54.37] We’ve talked about that a lot already, but it really is interesting, just the level of specificity, the number of overlapping provisions that are all kind of getting at similar ideas but doing it in different ways, to make sure that people are being evaluated on their individual merits, not based on stereotypes about a perceived disability or on the results of a test that doesn’t reasonably accommodate their specific circumstances. I think those are really kind of the key lessons of the ADA and the spirit that should continue to animate us even as we move into conversations about AI and hiring.
Alexandra Reeve Givens: [00:34:31.42] So it is these ideas, right, that you shouldn't be hired based on stereotyped assumptions about your ability, but on your actual ability to perform, and that job evaluations should focus on your ability to do the essential functions required for that job, acknowledging that you may need a reasonable accommodation to do them. The ADA places a really heavy hand on the scale in terms of individual assessment of your unique capacity to do the specific things required in a specific job, and in a way all of that actually runs fairly counter to the spirit of grouping and assessing people at scale like many of these AI tools do. So even though the ADA didn't talk about AI, it has an animating spirit and specific provisions in it that clearly read onto these situations today.
Jessica Miller-Merrell: As we look into the next 30 years of work, what emerging workplace technologies do you think will have the biggest impact on people with disabilities?
Alexandra Reeve Givens: [00:35:29.09] There's no question that technology is transforming the lives of many people with disabilities. I mentioned before, my father was paralyzed from the neck down, and I still remember to this day the day that he got access to Dragon Dictate software, which allowed him to dictate into word processing documents independently. No disrespect to the company, but all these years later I can say the comprehension back then wasn't all that great, and it obviously only worked for word processing documents on a big, clunky computer in his office. So now I just think about how he would have loved the world we're in today, with a voice-activated personal assistant to make phone calls independently, to choose music. You know, all of the things that come with the new powers of technology, and particularly voice-activated technology, for people with mobility impairments. It's just a huge transformative force in people's lives.
Alexandra Reeve Givens: [00:36:28.43] That's life in general. Now, of course, when we start thinking about remote technologies, they really should and can be improving workplace opportunities as well. There's a clear shift in workplace culture, a more open understanding now about teleworking and flexible schedules, and in my mind that is all for the good and can be enormously helpful, the more we make those opportunities available for people. That's the good stuff. I do still think there's a lot to be done. The unemployment rate for workers with disabilities is still more than twice that of non-disabled workers, and anybody who cares or thinks about this space needs to own that and think about our roles in trying to transform that reality. We have massive systemic problems that technology is just not going to fix: the fact that once you lose time in the workforce for any reason, it's really hard to make up that gap and you start to have compounding disadvantage, or that countless disabled people are penalized for working because it jeopardizes their benefits. There are so many different forces and barriers we need to combat, so while technology is a piece of it, there's just more work that we need to do that's about much more than that. If anyone's looking for a reading recommendation, the Center for American Progress's Disability Justice Initiative has a really powerful paper with recommendations for removing some of these barriers to economic mobility; it's a powerful read and I really recommend it. Also, most importantly, I think the changes that technology is bringing to the workplace are wonderful, but only if people have affordable access to them and if there's a workplace culture that really supports and endorses them. People need to see disabled folks in leadership positions, and all workers need to be respected and trusted to do their work with consideration for their personal circumstances. That part is deep, ongoing, person-to-person cultural work that needs effort every day.
Jessica Miller-Merrell: [00:38:30.05] Alex, thank you so much for joining us today on the Workology podcast. Where can people go to learn more about you and the work you're doing?
Alexandra Reeve Givens: [00:38:37.67] Sure. So the Institute for Tech Law and Policy at Georgetown is at www.GeorgetownTech.org and the Christopher and Dana Reeve Foundation is at ChristopherReeve.org. And we’d love to share information with anyone who’s interested.
Jessica Miller-Merrell: [00:38:54.2] Awesome. Thank you so much for taking the time. I really appreciate it.
Alexandra Reeve Givens: Thank you. This is a real treat.
Closing: [00:38:59.66] Are you tired of putting your professional development on the backburner? It’s time for you to invest in yourself with UpskillHR by Workology. We’re a membership community focused on personal development for HR. Gain access to our elite community, training, coaching and events. Learn more at UpskillHR.com.
Jessica Miller-Merrell: [00:39:24.71] Alex's insights here are invaluable for HR leaders, and I appreciate her expertise as a lawyer with a personal story and a passion for educating the public on the likelihood of AI causing discrimination against protected classes and underrepresented groups, including people with disabilities. It's imperative for HR leaders to be aware and informed of these types of conversations, the research, and the employment law and protections that are being developed right now. The Future of Work series is in partnership with PEAT, which is one of my favorites. Thank you to PEAT as well as our podcast sponsor, Workology.