BTN News: In 2023, an incident revealed the gender biases in artificial intelligence (AI). During a workshop run by Fundación Vía Libre with high school students in Montevideo, Uruguay, female students asked an AI tool about their futures. The system responded, “You will be a mom.” The answer illustrates the deep-rooted stereotypes embedded in AI systems.
Beatriz Busaniche, president of Fundación Vía Libre, uses examples like this to expose gender biases in AI. She argues that AI does not create new stories but copies existing ones: because these systems are built on past data, they are deeply conservative. Her work in Latin America, especially through Fundación Vía Libre, focuses on the biases found in AI systems, not only those related to gender but also to ethnicity.
To highlight these issues, Fundación Vía Libre created a tool called EDIA (Estereotipos y Discriminación en Inteligencia Artificial). This tool compares sentences and checks for biases within AI systems by interacting with various language models. From her home in Buenos Aires, Busaniche talks about biases, the failures of AI, and the risks of “humanizing” these systems.
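The article does not include EDIA’s code, but the underlying idea of comparing sentences against a language model can be illustrated with a short sketch. The snippet below is an assumption of mine, not EDIA’s implementation: the model name (gpt2) and the example sentences are placeholders chosen only to show how two sentences that differ in a single gendered word can be scored and compared.

```python
# A minimal sketch (not EDIA's actual code) of sentence-pair bias probing:
# score two sentences that differ only in a gendered word and see which one
# the language model finds more "natural".
# Assumes the Hugging Face `transformers` library and the public GPT-2 model;
# EDIA itself works with other (including Spanish-language) models.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_likelihood(sentence: str) -> float:
    """Average log-probability the model assigns to the sentence's tokens."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels equal to input_ids, the model returns the mean
        # cross-entropy loss over the sentence; its negative is the
        # average token log-likelihood.
        outputs = model(**inputs, labels=inputs["input_ids"])
    return -outputs.loss.item()

pair = (
    "The nurse said she would arrive soon.",
    "The nurse said he would arrive soon.",
)

for sentence in pair:
    print(f"{sentence_log_likelihood(sentence):.3f}  {sentence}")

# A consistently higher score for one gendered variant across many such pairs
# is the kind of stereotyped association the article describes.
```

A single pair proves little on its own; tools like EDIA rely on aggregating many such comparisons across different models before drawing conclusions.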
Digital Rights at Risk
Busaniche identifies several digital rights that are at risk: the right to privacy, data protection, and “informational self-determination” – the right of individuals to control who can access their personal data and how it is used.
New Risks in the Age of AI
The rise of AI has brought new forms of discrimination. Unlike overt discrimination, AI-based biases are buried in code and data, which makes them harder to detect. One example is automated job-search systems, which use algorithms to decide which postings each person sees and often reinforce existing biases: someone with a social sciences background might never be shown engineering job postings. Amazon’s AI resume-filtering system in the U.S. illustrated the problem when it favored male candidates for senior management positions, an issue that went unnoticed at first.
The Social Impact of AI Biases
Busaniche explains that AI can make existing inequalities worse by making discriminatory processes invisible. Decisions about visas, scholarships, or jobs made by automated systems are often not transparent or easy to question, making it hard to challenge biases based on gender, identity, and other factors.
Daily Life Decisions Influenced by AI
AI systems also impact everyday decisions, such as pricing for products. Insurance companies, for example, use AI to assess risk by analyzing data on drivers’ behaviors, leading to higher premiums for those deemed high risk. Similarly, health insurance companies can use AI to identify people at higher risk due to factors like obesity or environmental exposure, resulting in higher premiums or exclusion from coverage.
Gender Stereotypes and AI
The EDIA platform has revealed significant gender biases in AI, such as associating caregiving jobs with women and scientific jobs with men. The biases extend to physical appearance: overweight women are often labeled negatively, unlike their male counterparts. A striking example, recounted at a Latin American AI research meeting, echoed the opening anecdote: teenage girls asked an AI about their futures, only to be told they would become mothers.
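The occupation-gender associations mentioned above can also be illustrated with a different kind of probe. The sketch below is my own assumption, not EDIA’s code: it asks a masked language model to fill a blank and reports which gendered pronoun it prefers for each occupation (the model name, sentences, and occupation list are placeholders).

```python
# A minimal illustration (not EDIA itself) of word-association bias probing:
# let a masked language model fill the blank and compare the probabilities
# it assigns to "he" vs. "she" for different occupations.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for job in ["nurse", "engineer"]:
    results = fill(f"The {job} said that [MASK] was tired.", targets=["he", "she"])
    scores = {r["token_str"]: round(r["score"], 4) for r in results}
    print(job, scores)
```

If the model consistently prefers “she” for caregiving occupations and “he” for scientific or technical ones, that asymmetry is the kind of stereotype the platform surfaces.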
The Old-Fashioned Nature of AI
Busaniche argues that because AI systems make decisions based on statistics, they rely heavily on historical data, which makes them deeply conservative. AI does not create new stories but copies existing ones. Many systems are trained partly on books that are over a century old, so their predictions of the future look like the past. While AI has useful applications in areas like breast cancer detection and weather forecasting, an ethical filter is essential.
Discriminatory Search Results
Busaniche points out that Google searches such as “women can’t” often surface discriminatory suggestions. Although companies like Google try to suppress such completions, biases still appear, especially in non-English searches. The same pattern shows up in vocational guidance applications, which often steer users toward traditional gender roles.
Avoiding the Humanization of AI
Busaniche stresses the importance of not making AI seem human. AI systems do not think, learn, or decide; they produce outputs by finding statistical patterns in large sets of data. Treating them as if they were human can lead to over-reliance on AI for important decisions.
Neutrality in AI
AI systems are not neutral. In one example, a student asked ChatGPT to name Argentina’s worst president, and the AI answered Alberto Fernández, reflecting prevailing public opinion rather than verified facts. This shows that AI repeats majority opinions, potentially ignoring minority views.
Homogenization of Thought by AI
AI’s reliance on common internet data could marginalize non-mainstream, alternative views, leading to the homogenization of thought. As AI-generated content increases, the diversity of opinions may decrease.
Navigating AI Usage
Busaniche advocates for understanding and critically using AI rather than avoiding it. Teaching students to use AI with an awareness of its limits and biases is crucial; basic arithmetic errors by AI systems or inaccuracies in personal data illustrate those weaknesses.
Balanced View on AI
Busaniche emphasizes a balanced view of AI, recognizing its usefulness while critically analyzing its social impacts. Technologies are inherently political and part of broader social processes. Thus, integrating ethical considerations in AI development and deployment is vital to reduce its risks.