How might business schools & students lead the charge for artificial intelligence that is inclusive and equitable?
Learnings from our speaker series on “Advancing Inclusive AI”
By Genevieve Smith and Ishita Rustagi
The use of Artificial Intelligence (AI) is advancing rapidly, impacting all areas of our lives. AI is also increasingly being used for “social good”, tackling issues spanning climate change, inequality, financial inclusion, and healthcare. This is exciting, and we must continue exploring how this technology can solve some of the world’s toughest issues. At the same time, it is important to ensure these systems are built and managed in ways that do not inadvertently perpetuate inequities in our society. That is, it is important to work towards ‘inclusive AI’.
The concept of inclusive AI is twofold. It includes having inclusive processes for developing, using, and managing AI responsibly. It also includes building AI systems with the explicit goal of advancing social equity and inclusion. In the fall of 2020, we at the Center for Equity, Gender, and Leadership (EGAL) at Berkeley Haas conducted a three-part series of events on Advancing Inclusive AI with speakers from corporations, academia, nonprofits, and multilateral organizations. The series built on our Playbook on Mitigating Bias in AI.
A single thread wove through all the sessions: business leaders — current & future — have a major role to play in advancing inclusive AI. With AI slated to add $15.7 trillion to the global economy by 2030, students graduating from Haas and other business schools will undoubtedly be working at and leading organizations that interact with AI in some shape or form.
However, business school students are currently not prepared to grapple with challenges and opportunities related to inclusive AI.
There are several challenges, which our Berkeley Haas MBA candidates weighed in on. First, business school students often see data as objective. As Augustine Santillan shared, people coming into business schools may hold the misguided notion that data and data analytics are neutral. Second, core curricula covering data, data analysis, and artificial intelligence do not explore ethics. Nor do they examine how datasets can be biased depending on who collects the data and how it is collected, or other ways in which AI systems can lead to biased outcomes for certain groups. More broadly, there is a lack of courses discussing these issues and the management strategies to tackle them.
So how can business schools better prepare future business leaders to respond to risks and opportunities related to inclusive AI? Our speakers offered the following recommendations:
- Acknowledge the role and responsibility of business leaders in advancing inclusive AI. Business leaders are responsible for the high-level decisions that make it possible for AI teams to work towards socially responsible outcomes. “Objectives should be aligned from top leadership all the way down to data scientists and developers themselves to understand the importance of these directives.” — Augustine Santillan, MBA candidate at Berkeley Haas
- Teach leadership as inclusive leadership. In business school settings, there remain separate threads for business and impact, with societal impacts of business often relegated to CSR-related courses. In reality, this distinction is blurry — something that is especially true when thinking about AI. “As long as we have separate conversations about ‘business as usual’ with the single bottom line of profits, and ‘impact oriented business’ that assigns the responsibility of saving the world to CSR departments, we cannot make collective progress.” — Fayzan Gowani, MBA candidate at Berkeley Haas
- Embed ethical considerations in data science courses. AI is increasingly being leveraged across and within businesses. Business students can better manage and lead organizations if they understand the requirements for responsible data and algorithms, as well as how employees and clients might use AI systems. “The world is dynamic, and our algorithms are static, so we have to put ourselves in the viewpoint of somebody who might try to misuse them.” — Nitin Kohli, PhD candidate at UC Berkeley School of Information
When it comes to Haas, our students believe the MBA program is heading in the right direction. Asif Mohammad, who is building his new AI-based startup, SocratiQ, says his classes have prepared him to prioritize diversity and inclusion within his team early on, and encouraged him to reflect on important issues of bias as he builds his product. Still, more change is needed.
Beyond structural shifts, our speakers also highlighted the following recommendations for business students:
- Don’t underestimate the power of social sciences. AI systems are being developed to solve problems that exist in our world, our lives, and our organizations. Focusing only on the mathematical and technical aspects of data and algorithms is irresponsible. Leaders who do so will miss how bias can pose massive risks for a company and, in some cases, for certain groups in society more broadly. “What is fair is not unique to machine learning (ML) / AI. The struggles of ML reveal fundamental struggles of discrimination, inequity, and justice in the world.” — Matissa Hollister, Fellow at the World Economic Forum’s Centre for the Fourth Industrial Revolution and Assistant Professor at McGill University
- Don’t forget to question the status quo. Think critically and ask the tough questions. For instance: how are we defining “fairness”? What are we sacrificing by focusing on profit maximization, and how might this expose us to business risk?
- Understand that machine learning / AI may not always be the solution. Throwing AI at a problem can be like putting a band-aid over a much deeper issue. Consider the tough questions around whether it is the appropriate solution, and for whom.
Clearly, there is much work to be done, and the roles of business schools and business leaders cannot be ignored. Business schools in particular should step up to prepare our future leaders to advance inclusive AI.
We thank our incredible speakers (listed below) for these insights and recommendations that give us concrete steps to implement as we work towards inclusive AI.
- Jill Finlayson — Director | Women in Technology Initiative, UC Berkeley
- Asif Mohammad — MBA / MEng Dual Candidate | Berkeley Haas; Founder and CEO | SocratiQ, an AI startup that uses natural language processing (NLP) to drive civil and high-quality debates / deliberations online
- Maria Axente — Responsible AI & AI for Good Lead | PwC
- Kristen Itani Koue — Sr. Impact Manager | Samasource
- Fayzan Gowani — MBA Candidate | Berkeley Haas; GSI | UGBA 177 — Ethics & Artificial Intelligence
- Nitin Kohli — PhD Candidate | School of Information, UC Berkeley; Algorithmic Fairness and Opacity Working Group (AFOG); Data-Intensive Development Lab (DIDL)
- Matissa Hollister — Fellow, Centre for the Fourth Industrial Revolution | World Economic Forum; Assistant Professor | McGill University
- Reena Jana — Head of Content Strategy, Responsible Innovation (Global Affairs) | Google
- Augustine Santillan — MBA / MEng Candidate | Berkeley Haas