Reflections on Artificial Intelligence, Bias, and Leadership from the Global Women’s Forum
Innovation is moving fast. Systems are being broken and transformed. Now is the time for leaders to reflect and act for inclusion, before it’s too late.
I stepped into the Carrousel du Louvre in Paris alongside 3,000 other delegates from 95 countries to attend the Women’s Forum for Economy & Society Global Meeting. Participants spanned European government officials, multinational Fortune 500 companies, academics, scientists, non-governmental and charitable organizations, and UN and other multilateral officials — all under the theme of “taking the lead for inclusion.” The three-day Forum stretched across five focus areas: climate; health; artificial intelligence (AI); Science, Technology, Engineering and Mathematics (STEM); and business.
Throughout the conference, there was a sense of urgency, particularly around getting our existing economic system to work for more people, something it is not doing for far too many. Of particular interest to me was the Forum’s focus area on AI. As AI continues to advance rapidly, it will undoubtedly create radical societal and economic change, with immense implications for who wins and who loses.
Critically, the question I keep asking myself is: how can AI promote the inclusive, equitable society we seek? Currently, and all too often, AI inadvertently replicates or amplifies harmful norms and biases that can entrench power dynamics and inequities. This stems from biased data sets, a lack of diversity among engineers and data scientists, and our own societal and individual biases embedded in algorithms and AI systems. Solutions are required, and are thankfully being pondered by some of the brightest minds, as well as by companies with some of the largest market capitalizations.
As more and more companies design and adopt AI-based solutions around employment, healthcare, policing, education, etc., and governments increasingly manage state systems and social programs with AI, it’s critical that we understand and address this challenge. It seems that with self-reflection and prudence, bias can be mitigated in AI and, just maybe, AI could even help to promote equity and inclusion. But how?
Some narratives emerged during the forum that illustrate a starting point:
Discrimination is relative across geography and time. We need to ask questions around ‘fairness’ and understand how bias can come into play for different identities across different contexts. The problem is that fairness is relative, so who is defining what is fair, and where? This remains a central question facing all stakeholders as principles and frameworks are sought. Understanding power dynamics, as well as how concepts and language around diversity, equity, and inclusion vary within and across contexts, is key.
Some solutions are working, but most organizations don’t know where to start. Promising solutions range from building inclusive engineering and data science teams to auditing algorithms for both obvious biases and more subtle harmful societal norms. Regardless, change needs to happen at a holistic, systems level and will require time.
There is a role for everyone in solutions. The private sector must continue to address biased AI, recognizing its societal implications and treating it as a business risk. Meanwhile, academia can fill research gaps, shine a light on pressing issues and early solutions, and advance the multidisciplinary education necessary for future leaders. Government agencies and multilateral institutions can also lead the way on guidelines and legislation for responsible AI.
The Forum left me full of ideas and inspiration, but also with a deep sense of urgency. There can be a tradeoff between ethics and innovation: innovation moves rapidly and doesn’t always incorporate ideals of fairness and accountability. As the Facebook motto puts it, “Move fast and break things.” But what if those things are people and democratic systems?
As my plane descended into San Francisco, a wave of hope washed over me again: The time is NOW for collaboration, and for action.
For our part, at the Center for Equity, Gender and Leadership (EGAL) at UC Berkeley’s Haas School of Business, and on AI specifically, we are pushing ahead to translate academic research into practitioner-oriented solutions that help business leaders mitigate bias in AI. Our recommendations will be available in a forthcoming Equity Fluent Leadership Playbook.
We look forward to new and ongoing collaboration with stakeholders from the Women’s Forum meeting and beyond. True to Facebook’s motto, things are moving fast, and the time is now to help business leaders both reflect on the diverse impacts of the value they create and build inclusivity into it — our global future just might depend on it.
To stay up-to-date on how EGAL at Berkeley Haas is helping leaders advance businesses that are more diverse, competitive, and innovative in a changing global landscape, informed by knowledge and movements for equity and inclusion, visit www.haas.berkeley.edu/equity.