Digital Access Inequality
Context: A 2022 Oxfam report found that 70% of India’s population lacks reliable digital access, and 40% of mobile users don’t own smartphones.
Danger: AI tools may exclude rural and low-income communities from essential services.
Insight: Without inclusive infrastructure, AI deepens the digital divide.
Bias in Criminal Justice Algorithms
Context: AI tools used in India’s justice system risk reinforcing bias against Dalits, Adivasis, and minorities.
Danger: Historical discrimination is embedded in datasets, leading to unfair sentencing and profiling.
Insight: AI must be trained on diverse, representative data to avoid replicating systemic injustice.
Gendered AI Access
Context: Women in India use the internet 33% less than men, and only 31% of rural women are online.
Danger: AI platforms may overlook female experiences, reinforcing gender gaps in education, employment, and safety.
Insight: AI must reflect gender realities, not erase them.
Biased Hiring Tools
Context: AI hiring systems misinterpret Indian accents and cultural cues, favouring Western norms.
Danger: Qualified candidates may be excluded due to algorithmic bias.
Insight: Ethical AI must be locally trained and culturally attuned.
Surveillance & Profiling
Context: AI-powered facial recognition is deployed in cities like Lucknow to detect harassment based on facial expressions.
Danger: Marginalised groups face increased surveillance and loss of autonomy.
Insight: AI must empower, not control, those it claims to protect.