6. Socioeconomic and Environmental > 3 - Other

Demographic diversity of researchers

The AI research establishment inherits the patterns of under-representation prevalent across most technical fields. In North America, much of professional AI research requires a Ph.D., yet less than 25% of Ph.D. computer scientists are women, and fewer than 2% are Black or African American [608]. This holds globally and outside the research community: LinkedIn data suggests that only 22% of AI professionals are women [161]. Since the vast majority of AI practitioners work for private companies, limited corporate statistics on gender and racial diversity hinder a full understanding of the situation [402], but those few statistics that do exist are not encouraging: only 5% of Google and 7% of Microsoft employees are Black or African American, with potentially even lower representation at more senior levels [212, 384].

Source: MIT AI Risk Repository (mit885)

ENTITY: 3 - Other

INTENT: 3 - Other

TIMING: 3 - Other

Risk ID: mit885

Domain lineage: 6. Socioeconomic and Environmental (262 mapped risks) > 6.1 Power centralization and unfair distribution of benefits

Mitigation strategy

1. Prioritize the enhancement of cognitive diversity and representative inclusion within AI research and development teams. This involves implementing robust strategies for the equitable recruitment and retention of researchers from under-represented demographics to ensure a broad spectrum of perspectives influences problem formulation, data collection, and algorithmic design.

2. Mandate the collection and curation of diverse, representative training datasets. Data governance policies must ensure that the input data accurately reflects the demographic variability of the target population, thereby counteracting the risk of models inheriting and amplifying historical and societal biases from unrepresentative samples.

3. Establish a continuous AI governance framework that includes regular, comprehensive bias audits and transparent reporting. This involves implementing systematic checks, such as performance testing across different demographic groups and post-deployment monitoring, to detect and address emergent or persistent biases (algorithmic or outcome-based) throughout the AI lifecycle.