A Conversation with Jill Finlayson, Director of the Women in Technology Initiative at the University of California

Jill Finlayson is the director of the University of California’s Women in Technology Initiative (WITI@UC) and a lifelong advocate for women in the technology sector and equitable workplaces. Having worked at eBay in its early days, founded and mentored technology startups, and now leading WITI@UC, she draws on her technical background and her ethical perspective to break down the barriers that lead to inequitable advancement in the technology sector. There are clear synergies between EGAL and WITI, and Jill has been a helpful thought partner, providing valuable insights to the EGAL team in the development of EGAL’s forthcoming Equity Fluent Leadership Playbook: Mitigating Bias in AI.

As a Haas undergraduate who served on the EGAL Student Advisory Board and is pursuing a job in the technology sector after graduation, I heard Finlayson speak about the bias inherent in AI tools used for recruitment and retention and realized its real-world implications for current students applying to jobs. After hearing her discuss opportunities to reduce bias in HR processes, I thought she would be a great source to underscore why advocating for more equitable practices in the AI space matters, so that my peers and business leaders can begin to understand the mitigation techniques coming out in EGAL’s playbook.

Q: Bias in AI is just one aspect of the broader biases in engineering and technology. For example, you have previously spoken in lectures about how women are more likely to be hurt in car crashes because the test dummy in the driver’s seat is modeled after a male body. Why does this kind of bias in engineering persist and how can it be prevented?

A: It’s hard for people to challenge the way things have always been done and the “gold standards,” since change is hard and change is costly. For example, the crash test dummy was first used in the military, where it was mostly men at the time. However, women now account for more than half of all licensed drivers. To adjust for women taking the driver’s seat in the world outside of the military, they simply shrank that original dummy down, which does not accurately reflect women’s dimensions. An anatomically correct female dummy now exists, but adopting it would mean cost and regulatory changes the industry would have to absorb. It’s going to take intentional effort to change the system. [The way to mitigate this] would be for customers to demand the shift or for the government to regulate it.

Q: What are the most pressing challenges regarding AI in Human Resources (HR) today?

A: I worry about AI being done wrong and a dataset or algorithm being biased. I worry about what is in the algorithm, what proxies are being used in the algorithm, and what the output of the algorithm will be. In the process of creating AI tools, if developers don’t test in multiple contexts, they won’t be aware of the biases in their algorithms. Then, if someone is negatively impacted by AI later on, there is no way for redress. I usually worry about AI that doesn’t work. But what does it mean for women if AI works correctly? What if it’s working exactly as intended and it harms women or other under-represented groups? There are, of course, aspects of intersectionality that compound the effects of AI on women who hold intersectional identities (see video for more on the compounding effects).

The intersection of HR and AI is a small microcosm of the biases of AI. AI in HR decides who will see job advertisements, and women are less likely to be shown high-paying jobs. AI also decides who will make it through a resume filter, and women often talk about their skills differently on resumes. Once candidates make it to an interview, AI can be used by HR to assess their personality through recorded video interviews, which is problematic since facial recognition technology performs much worse for women and people of color. However, one upside to video interviews is that they are structured, which allows for more standardized, equitable treatment of candidates.

Then, when a candidate is hired, AI can affect advancement within organizations. For example, datasets and proxies may not be transparent, so even with intentionally inclusive developers, machine learning looks for patterns that developers may not know exist and replicates them. This is why we need explainable AI for human checks and balances: explainable AI means that for every block of code, we can check whether it actually did what it was supposed to do.

Note: Finlayson suggests watching this video for more detail on these issues, including a legal perspective.

Q: Can you give a brief overview of your main recommendations for what people can do to prevent bias when using AI for recruitment? What can business leaders and companies do to prevent biases in the use of AI for recruitment and retention?

A: First, we need to have diverse teams of creators, so we don’t have blind spots in our innovation. Those teams and leaders need to be asking the ethical questions: Who is helped by this, and who is harmed by this? How might somebody misuse this technology? Asking these questions early on and having clear requirements for transparent data are important for mitigating bias. If you are the business leader hiring an AI firm for your recruitment processes, are you asking what’s in the algorithm or who was on the team that created it? Are you asking them to show how they tested it in different contexts and mitigated for biases? If you, the client, don’t ask these questions, the AI firm has little incentive to test for bias on its own. If it’s an algorithmic problem on the part of the firm, it can be fixed with effort; eliminating AI bias is often a matter of priorities. At the end of the day, companies creating or using AI have to take responsibility for their algorithms and their fairness and put in the effort to mitigate bias.

Q: What actions can applicants and employees take to prevent themselves from falling into the biased recruitment processes created by AI? What advice do you have for students recruiting for internships and post-graduation jobs when facing biased recruitment processes?

A: As individuals, we need to be aware of and participate in these conversations. We need to demand transparency, consent, freedom from bias, redress, and oversight. [Students and entry-level employees] can still ask questions of peers and immediate supervisors; they can bring up these issues and be the voice in the room for better, more inclusive innovation. Questioning the “gold standard” processes and criteria, like the requirements for crash dummies (see first question), would provide real value to a company. If you believe you have experienced a biased recruitment process, point it out and learn from it for your own innovation; then you’ll know what the barriers are that should be addressed. Applicants and employees need to understand the gravity and reach of AI, they need to understand the data issues, and they need to understand the systemic impact on so many people. I recommend that they ground themselves in books like Invisible Women, Weapons of Math Destruction, and Algorithms of Oppression in order to see the issues and to provoke questions. We need people to understand how urgent this problem is and that we can’t wait to deal with it.

Want to go deeper and get more insight on solutions needed?

Make sure to look out for the release of EGAL’s “Mitigating Bias in AI: An Equity Fluent Leadership Playbook,” coming soon, to learn how to recognize some of the problems described in this interview and how to implement the solutions EGAL recommends. Sign up here to be alerted when it goes live and get exclusive insights.

--

Center for Equity, Gender & Leadership (EGAL)

At the heart of UC Berkeley's Business School, the Center for Equity, Gender, and Leadership educates equity-fluent leaders to ignite and accelerate change.