The BAIR Responsible & Equitable AI (RE-AI) Initiative drives critical research, innovation, and collaboration toward responsible and equitable AI.

As AI innovation continues to accelerate, so too must research examining its implications for society and methods for advancing more responsible AI. The BAIR Responsible & Equitable AI Initiative supports an inclusive community of researchers across AI and social science disciplines, advancing understanding of theories and practices for responsible and equitable AI. The initiative explores innovations in creating more responsible data, models, and management approaches that better support an inclusive and equitable society.


We have three areas of work:

  • (1) Research projects – We prioritize multidisciplinary research projects, which may build on research collaborations with other organizations and groups within and outside of UCB. We explore a variety of research topics related to responsible and equitable AI design, development, and deployment.
  • (2) Community of practice – The initiative seeks to build a sense of community among UC Berkeley researchers exploring responsible and equitable AI across campus, and to share relevant opportunities.
  • (3) Responsible & equitable AI convenings – Convenings allow researchers to connect with one another, as well as with industry leaders, on topics of responsible and equitable AI, on campus and beyond, to spur knowledge sharing and collaboration.


Current research projects

  • Assessing linguistic bias in ChatGPT: This research examines ChatGPT’s performance for various English language varieties to understand and make transparent linguistic biases and ideologies that may be reflected in ChatGPT and related large language models.
  • Proactive Strategies for Equitable & Responsible Generative AI: While various guidance exists for technical decision makers regarding responsible AI, there is little information or guidance for those making business decisions about incorporating generative AI models into products and services. This research examines how decision makers are considering ways to incorporate generative AI, and paths toward utilizing generative AI models in ways that are responsible.


Affiliated researchers & collaborators

  • Eve Fleisig, PhD Student, BAIR, UC Berkeley
  • Brandie Nonnecke, Associate Research Professor, UC Berkeley; Founding Director, CITRIS Policy Lab
  • Jessica Newman, Co-Director, UC Berkeley AI Policy Hub; Director, AI Security Initiative; Co-Director, Algorithmic Fairness and Opacity Group; Research Fellow, Center for Long-Term Cybersecurity
  • Merrick Osborne, Postdoc, UC Berkeley
  • Nataliya Nedzhvetskaya, PhD student, UC Berkeley
  • Brian Lattimore, PhD student, Stanford

Are you a UCB faculty member, staff member, student, or postdoc interested in joining the community of practice? Sign up HERE.

Interested in collaborating or learning more? Contact Genevieve Smith.
