Advancing Responsible AI Innovation & Leadership: What does it mean to practice responsible innovation in AI, particularly for AI systems using human language?

On October 14, 2021, the Center for Equity, Gender & Leadership (EGAL) hosted four inspiring leaders in corporate innovation to discuss responsible management strategies for AI systems that rely on human language. AI systems now touch ever more areas of our lives, and their use is expanding rapidly; they are projected to add $15.7 trillion to the global economy by 2030. Both the opportunities and the potentially harmful consequences have risen to the forefront for corporate practitioners committed to instilling equity and inclusion in product development.

All four speakers paid particular attention to AI systems built on human-language data, including chatbots, hate speech filters, and speech recognition software, and focused on how best to identify, mitigate, and remove biases.

The event launched EGAL’s new guide, Responsible Language in Artificial Intelligence and Machine Learning, which aims to support management practices and actions that infuse equity and inclusion into the product life cycle. Jen Gennai, Founder & Head of Responsible Innovation at Google, emphasized that “tools are really important and they need to be repeatable and transferable. Toolboxes, like the one that EGAL has created, [allow] people to learn when they may not have the same level of resources or expertise that corporations or larger institutions have.”

The speakers shared their perspectives and experiences building teams and product cycles centered on responsible AI practices. Below are four key takeaways from the panel.

  1. The practice of responsible AI is every person’s job. Rachel Gillum, PhD, Global Policy Director for the Ethical and Humane Use of Technology at Salesforce, emphasized that although equity and inclusion are a core pillar of her team’s work, they rely on many teams with a range of experience, backgrounds, and expertise to plan, develop, audit, and adjust AI technology. Specific teams include the Ethics by Design team, the Ethical Use Policy team, and the Inclusive Design & Product Accessibility team, all working collaboratively to plan, develop, and implement during each phase of the product life cycle. Rachel also referenced an Ethical Advisory Council that consists of “a group of external parties from all over the world, different experts and experiences…as well as frontline employees…and communities directly impacted…to scrutinize all of the decisions being made.”
  2. Disaggregate the data. Miranda Bogen, Privacy Policy Manager, Artificial Intelligence & Machine Learning at Facebook, advises engineering teams building AI systems. She recommended that teams stop building for the easiest audience, or the audience with the most data, and instead design and build for the audience that may be harmed by the product or is not clearly benefiting from it. “Never just think about the overall performance, or the overall averages. Disaggregating data, and considering how this product is performing for different communities is critical.” (A minimal sketch of this kind of disaggregated evaluation appears after this list.) Miranda also referenced the tension between data and privacy and recommended that teams follow a set policy that defines how they collect and use their users’ data.
  3. Feedback is more important than ever. Creating a feedback loop and dialogue between companies, their customers, and the greater community is one of the best ways to mitigate bias, increase inclusivity, and improve products. Jen Gennai shared three main feedback channels that her team uses to solicit input, especially from marginalized and underrepresented customers. The first is feedback shared through Google’s established channels, which captures reviews and customer experiences. The second is feedback gathered through participatory research, aimed at elevating voices from communities who may not use the product but are affected by it, or who do not know the feedback channels exist. The third involves “working directly with community groups, community leaders that can try to represent a community scale instead of expecting that one person can represent the feelings or thoughts of everyone.”
  4. Responsible AI requires a lifelong commitment to practice and learning. All four panelists recommended that MBA students interested in leading companies that develop AI and ML technology commit to increasing their awareness, stewardship, and practice of layering ethical considerations into every aspect of their work. “I am hesitant to say there should be any [course] modules around ethics and responsibility,” said Jen Gennai. “It should be built into, and integrated into your thinking, into your education. Just as we think about it along the product life-cycle, [MBA students] should be thinking about it across the business and education life cycle. These are critical muscles we are building and you’ve got to use it to make it better.”
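To make Miranda’s point about disaggregation concrete, below is a minimal, hypothetical Python sketch. The group names, labels, and toy records are illustrative assumptions, not drawn from any panelist’s systems; the sketch simply contrasts a single overall accuracy number with accuracy computed separately for each group.

```python
# Hypothetical sketch: disaggregating model evaluation by group.
# The groups, labels, and toy records below are illustrative assumptions only.
from collections import defaultdict

# Toy evaluation records: (group, true_label, predicted_label)
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]

# A single overall accuracy can hide how the model behaves for each community.
overall = sum(y == p for _, y, p in records) / len(records)
print(f"overall accuracy: {overall:.2f}")

# Disaggregated accuracy surfaces gaps between groups.
per_group = defaultdict(list)
for group, y, p in records:
    per_group[group].append(y == p)

for group, hits in sorted(per_group.items()):
    print(f"{group}: accuracy {sum(hits) / len(hits):.2f} (n={len(hits)})")
```

In this toy example the overall number looks moderate while the per-group numbers reveal that one group is served noticeably worse; in practice, teams would slice real evaluation data by the communities they serve, within the privacy constraints Miranda raised.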

Berkeley Haas alum and Director of Product Management at Google, Archana Kannan, shared final remarks for the session, reminding audience members of the Berkeley Haas Defining Leadership Principles that guide our students, faculty, and partners on campus. “All of us have to go beyond ourselves,” Archana said. “This is an early space and there is so much that needs to be shaped. We are nowhere close to completion and everyone needs to do their part to help define this in the next decade.”

--

Center for Equity, Gender & Leadership (EGAL)

At the heart of UC Berkeley's Business School, the Center for Equity, Gender, and Leadership educates equity-fluent leaders to ignite and accelerate change.