AAAI 2023 Workshop on Diversity in Artificial Intelligence
Artificial Intelligence - Diversity, Belonging, Equity, and Inclusion (AIDBEI)
Welcome to AAAI 2023!
The AAAI 2023 Workshop on Artificial Intelligence Diversity, Belonging, Equity, and Inclusion (AIDBEI) is a one-day virtual event at the conference of the Association for the Advancement of Artificial Intelligence (AAAI). This workshop is the third in the series of workshops organized by Diverse in AI, an affinity group that aims to foster links among participants from underrepresented populations in artificial intelligence, including but not limited to women, LGBTQ+ persons, and people of color. Diverse in AI was founded with the support and participation of our allies from Black in AI, WiML, LatinX in AI, Queer in AI, {Dis}Ability in AI, Indigenous in AI, and the Black in X Network.
Place: Virtual
Start Time: 9:00 AM Eastern Standard Time (EST)
End Time: 7:00 PM Eastern Standard Time (EST)
Date: February 11, 2023
PMLR proceedings from last year's workshop at AAAI 2022 |
Call for Participation
This workshop is the third in the series organized by Diverse in AI, an affinity group that aims to foster links among participants from underrepresented populations in artificial intelligence, including but not limited to women, LGBTQ+ persons, and people of color (e.g., Black in AI, WiML, LatinX in AI, Queer in AI). Meanwhile, many service and outreach events such as the Grace Hopper Conference give technologists opportunities to understand the needs of underserved populations and, in turn, to give back to these communities. The organizers of this workshop wish to bring these communities together to pursue their intersecting goals through interdisciplinary collaboration. This will help disseminate the benefits of AI to all underserved communities and further support the mentoring of students and future technologists from isolated, underprivileged, and underrepresented communities.
We invite original contributions that focus on best practices, challenges, and opportunities for mentoring students from underserved populations, education research pertinent to AI, and AI for Good as applicable to underserved student communities. In keeping with the organizers' affiliations with WiML, Black in AI, LatinX in AI, and Queer in AI (whose early presence and development occurred at NeurIPS and ICML), technical areas will include machine learning, with emphasis on natural language processing (NLP), computer vision (CV), and reinforcement learning (RL).
List of Topics
1. Demographic studies regarding AI applications and/or students from underserved populations
2. Reports of mentoring practice for AI students from underserved populations
3. Data science and analytics on surveys, assessments, demographics, and all other data regarding diversity and inclusion in AI
4. Survey work on potential underserved populations, especially undergraduate students from such populations
5. Fielded systems incorporating AI and experimental results from underserved communities
6. Emerging technology and methodology for AI in underserved communities
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. Submissions should use the AAAI conference Author Kit.
The following paper categories are welcome:
- Long papers (5-8 pages)
- Short papers and poster abstracts (2-4 pages)
- Contributed talks
Committees
Program Committee
- Dr. Ushnish Sengupta, Algoma University
- Dr. Maria Skoularidou, researcher in theoretical computer science and statistics
- Dr. Andrew Hundt, Johns Hopkins University
Organizing Committee
- Dr. William H. Hsu, Kansas State University
- Enock Okorno Ayiku, Kansas State University
Panelists
- Dr. Elena Sizikova, Center for Devices and Radiological Health (CDRH)
- Dr. Ushnish Sengupta, Algoma University
- Avijit Ghosh, Ph.D. candidate, Northeastern University
- Dr. Gelyn Watkins - Black in AI
Workshop Schedule
- *All times are in EST
- 9:00 - 9:20 Workshop Opening
- 9:20 - 10:20 Session 1
- 10:20 - 10:50 Invited talk by Dr. Ushnish Sengupta
Title: Does diversity in AI change anything without changing organizational incentive systems?
Talk Abstract: AI projects demonstrate a repeating pattern of implementations with well-known biases, specifically biases in terms of gender and race. One solution proposed for mitigating bias in AI products is increasing the diversity of product development teams. This presentation argues that increasing the diversity of AI development teams is a necessary but not sufficient condition: without a change in organizational culture, bias will persist. The presentation draws on existing Human Resource Management theory, including Job Motivation Theory, to understand the culture and incentive systems that need to change significantly, in parallel with team diversity, for AI development organizations to mitigate bias in AI products.
- 10:50 - 11:10 Invited talk by Avijit Ghosh
Title: On the evolving tension between centralized regulation and decentralized development of Text-to-Image Models
Talk Abstract: AI art models such as Stable Diffusion and DALL·E are not immune to the bias issues plaguing every sphere of ML. In response to the numerous bias and copyright issues of generative AI, there have been calls for centralized regulation against the unregulated, harmful use of these models. In response, Stable Diffusion's tactic has been to completely open-source its model, claiming that bias is an issue that will be solved once marginalized people fine-tune the model with hyperlocal data. In this talk I explore the numerous ethical and copyright issues with Stable Diffusion, and how decentralized training continues to have bias problems. I end with a plea to practitioners to use this powerful technology responsibly.
- 11:10 - 11:30 Invited talk by Dr. Andrew Hundt
Talk Abstract: Metrics for ‘Artificial Intelligence’ are regularly described as showing objective improvements to performance, efficiency, and generalization. We will explore ways in which such claims can suffer critical breakdowns due to the underlying subjectivity of assumptions and framing. Experiment design can prevent participants from reporting problems, or lead to experiment outcomes that measure participant side-channel communication rather than the intended metrics. Other cases can mismeasure populations. For example, we have conducted an audit which quantitatively shows how a prior robot ‘AI’ chooses ‘criminals’ based on passport photos in situations where no criminals are present, with additional biases with respect to ‘race’ and gender. We will consider the opportunities to holistically collaborate with and learn from the existing knowledge of a diverse range of people, participants, and perspectives to ensure a strong fit of methods and metrics to applications, with the hope of creating future outcomes that prove more safe, effective, and just.
- 11:30 - 11:50 Break
- 11:50 - 12:20 Panel discussion on the popularization and commercialization of AI, and the risks and opportunities they pose to women, BIPOC people, LGBTQ+ people, and other people who are or have been excluded, discriminated against, and marginalized in the field of AI.
- 12:30 - 1:30 Session 2
- 1:30 - 2:00 Panel discussion, continued. The questions this time include:
(i) the ubiquity of stakeholdership and the causes and consequences of exploitation;
(ii) thoughts on exclusion and what established researchers should do to increase not only representation of people as stakeholders but representation of their well-being, interests, and buy-in as contributors to the field;
(iii) social and economic effects of the "third AI summer" and the looming (or incipient) "third AI winter".
- 2:00 - 2:15 Affinity group introduction by Dr. Gelyn Watkins
Group name: Black in AI
- 2:15 - 2:30 Online social and call for mentoring program participants
Accepted Papers
TBD
Publication
Workshop papers will be published with the Proceedings of Machine Learning Research (PMLR). Submissions must follow the PMLR standards, and PDF versions should be submitted via EasyChair.
Contact Us
All questions about submissions should be emailed to huichen@ksu.edu, physician@ksu.edu, and bhsu@ksu.edu.