VM-LEARNING /class.xi ·track.ai ·ch-b8 session: 2026_27

~/AI Ethics and Values

PART B ▪ UNIT 8
AI Ethics and Values
5 Pillars · Bias & Sources · Mitigation · AI Policies · Ethical Dilemmas
AI has become a transformative force — but with that power come serious ethical concerns. This final unit introduces you to the ethical dimensions of AI: the five pillars (Explainability, Fairness, Robustness, Transparency, Privacy), the kinds of bias that creep into AI systems, strategies to mitigate bias, the policies major organisations have adopted, and famous thought-experiments like the Moral Machine and Survival of the Best Fit games.
Learning Outcomes: Understand fundamental ethics in the AI context · Understand the sources of AI bias and their real-world impact · Apply bias-mitigation strategies · Recognise the significance of AI policies for responsible use

1.1 What is Ethics?

Ethics refers to the moral principles that govern human behaviour and decision-making. It covers concepts like right and wrong, fairness, justice, accountability. Ethical considerations guide individuals and organisations in making responsible choices aligned with societal values.

AI Ethics

AI Ethics refers to the ethical principles and guidelines that govern the design, development, and deployment of AI technologies. Its aim: ensure AI systems are developed and used in ways that are fair, transparent, accountable, and aligned with human values.
Example 1 — Mistaken facial recognition: A CCTV camera at a sports stadium, analysing faces frame-by-frame, flags you as a criminal because a cloud-shadow distorted your face. How would you prove your innocence? Whose fault is the mistake — yours, the system's, or the developers'?
Example 2 — USA 2018 healthcare AI: An AI system used to allocate care to 200 million patients offered lower-standard care to black patients. The algorithm used predicted healthcare cost as a proxy for need — but because black patients historically paid less, the AI learned they "deserved" less care. The real question: whose problem is it? The developers? The historical data? Society itself?

1.2 The Five Pillars of AI Ethics

🔍 1. Explainability · How AI decides
⚖️ 2. Fairness · No bias
🛡️ 3. Robustness · Reliable results
📋 4. Transparency · Clear disclosure
🔒 5. Privacy · Personal data
Pillar · Meaning
🔍 Explainability: AI's decisions must be interpretable; users can understand how algorithms make predictions, which fosters trust and accountability.
⚖️ Fairness: Remove bias and discrimination from ML models based on sensitive attributes (race, gender, disability, sexual orientation, socioeconomic class).
🛡️ Robustness: AI must give accurate, reliable results across different conditions and datasets, with no unexpected errors and stable behaviour over time.
📋 Transparency: Openness about design, operation, and implications, with clear documentation of data, algorithms, and decision-making processes.
🔒 Privacy: The right of individuals to control their personal information and be free from unwanted intrusion; safeguards autonomy and dignity.
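The Fairness pillar can be made concrete with a simple check: compare the rate of favourable decisions each group receives (sometimes called demographic parity). The sketch below uses invented loan-approval data purely for illustration; the group names and numbers are assumptions, not drawn from any real system.

```python
# Minimal fairness check (demographic parity): compare the rate of
# favourable decisions an AI system gives to each group.
# All decision data below is invented for illustration.

def positive_rate(decisions):
    """Fraction of decisions that are favourable (1 = approved)."""
    return sum(decisions) / len(decisions)

# Hypothetical loan decisions (1 = approved, 0 = denied) per group.
decisions_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6 of 8 approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],   # 2 of 8 approved
}

rates = {g: positive_rate(d) for g, d in decisions_by_group.items()}
gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'group_a': 0.75, 'group_b': 0.25}
print(f"parity gap: {gap:.2f}")  # a large gap is a warning sign of bias
```

A gap of 0.50 here would prompt a closer audit; in practice fairness toolkits compute many such metrics, not just one.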

1.3 What is Bias?

Bias = a preference / tendency towards something or someone over others, often without fair consideration of all information. It can lead to unfair treatment or decisions based on beliefs, past experiences, or stereotypes.
AI Bias (aka machine-learning bias or algorithm bias) = AI systems produce biased results that reflect and perpetuate human biases — including historical and current inequalities. Bias can live in the training data, the algorithm, or the predictions.
Bias Awareness = knowing that AI can make unfair choices because of how it was trained or built. Without awareness, bias hinders people's ability to participate in the economy and society — and reduces AI's potential for good.

1.4 Three Sources of AI Bias

📊
1. Training Data Bias
Over- or under-representation in the dataset. Example: a facial-recognition model trained mostly on white faces performs poorly on people of colour. Inconsistent labelling also causes bias.
⚙️
2. Algorithmic Bias
Programming errors; developers unfairly weighting factors; using indicators like income or vocabulary that unintentionally discriminate. Flaws in the training data can also be amplified by the algorithm.
🧠
3. Cognitive Bias
Humans building AI bring their own experiences and preferences. Example: using datasets mostly from Americans instead of a global sample — the AI inherits that bias.
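Training-data bias of the kind described above can often be caught before training with a simple representation audit: count how often each group appears in the dataset. The sketch below uses a made-up five-sample dataset and an arbitrary 30% threshold, both illustrative assumptions.

```python
# Quick audit of a training set for representation bias: count how
# often each group appears before training. The samples and the 30%
# flagging threshold are invented for illustration.
from collections import Counter

training_samples = [
    {"group": "lighter_skin"}, {"group": "lighter_skin"},
    {"group": "lighter_skin"}, {"group": "lighter_skin"},
    {"group": "darker_skin"},
]

counts = Counter(s["group"] for s in training_samples)
total = sum(counts.values())
for group, n in counts.items():
    share = n / total
    flag = "  <-- under-represented" if share < 0.3 else ""
    print(f"{group}: {share:.0%}{flag}")
```

Here the audit flags `darker_skin` at 20% of the data, mirroring the facial-recognition example: a model trained on this set would likely perform worse on the under-represented group.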

1.5 Real-World Examples of AI Bias

🏥 Healthcare: Under-represented data on women or minorities skews predictive algorithms. CAD (Computer-Aided Diagnosis) systems give lower-accuracy results for black patients than for white patients.
💼 Online Advertising: Google's ad-serving system showed high-paying jobs to men more often than to women; Carnegie Mellon research revealed the gender bias.
🎨 Image Generation: When Midjourney's generative AI was asked for images of people in specialised professions, the older people shown were always men, reinforcing gender bias.

1.6 Five Biased AI Scenarios & Consequences

AI System · How bias shows up · Consequences
👤 Facial Recognition · Darker skin tones and women are misidentified more often. · Wrongful arrests; marginalised communities disproportionately affected; trust in law enforcement eroded.
🚔 Predictive Policing · Uses historical crime data that reflects racial and socioeconomic biases. · Over-policing of minority neighbourhoods; racial profiling; tensions with communities of colour.
💼 Hiring Algorithms · AI screens favour certain demographic groups based on biased past hires. · Reinforces employment disparities; discrimination against under-represented groups.
🩺 Healthcare Algorithms · Different treatment recommendations based on race or socioeconomic status. · Unequal patient care; worse health outcomes for minorities; widens the healthcare gap.
💳 Credit Scoring · Disadvantages low-income individuals and people of colour in loan approvals. · Limits financial opportunities; perpetuates socioeconomic inequality; blocks economic mobility.

1.7 Mitigating Bias — Five Strategies

Why mitigate? 3 reasons: (1) biased AI makes existing unfairness worse; (2) biased AI makes people trust technology less; (3) addressing bias is essential for upholding ethical principles.
🌍 1. Use Diverse Data: Train on many kinds of examples and viewpoints, so the model is less likely to learn a biased pattern.
🔎 2. Detect Bias: Use tools and audits to measure bias in AI decisions across demographic groups before deployment.
⚖️ 3. Fair Algorithms: Design algorithms with fairness constraints built in; models must consider fairness in every decision.
📋 4. Be Transparent: Explain clearly how AI makes decisions; if people can see the logic, they can spot and fix bias.
👥 5. Inclusive Teams: Build AI with teams from diverse backgrounds, who spot biases that a uniform team would miss.
Did you know? IBM AI Fairness 360 is an open-source toolkit with 70+ fairness metrics and 10+ bias-mitigation algorithms (pre-processing optimisation, prejudice remover, etc.) — great resource for detecting and reducing bias in ML models.
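One of the pre-processing mitigation techniques found in toolkits such as IBM AI Fairness 360 is "reweighing": each training example gets a weight so that group membership and outcome look statistically independent. The sketch below hand-implements the idea in plain Python on an invented (group, label) dataset; it illustrates the technique, not the toolkit's actual API.

```python
# Sketch of the "reweighing" pre-processing idea: weight each training
# example by w(g, y) = P(group=g) * P(label=y) / P(group=g, label=y),
# so under-favoured (group, outcome) pairs count for more.
# The (group, label) data is invented for illustration.
from collections import Counter

# label 1 = favourable outcome (e.g. "hired"), 0 = unfavourable.
data = [("m", 1), ("m", 1), ("m", 1), ("m", 0),
        ("f", 1), ("f", 0), ("f", 0), ("f", 0)]

n = len(data)
group_counts = Counter(g for g, _ in data)   # P(group)
label_counts = Counter(y for _, y in data)   # P(label)
pair_counts = Counter(data)                  # P(group, label)

weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}
for pair, w in sorted(weights.items()):
    print(pair, round(w, 3))
# ('f', 1) gets weight 2.0: the rare "woman hired" examples are upweighted.
```

Training a model with these sample weights nudges it away from reproducing the biased hiring pattern in the raw data, without discarding any examples.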

1.8 Developing AI Policies

AI policies ensure technologies are used responsibly, safely, and ethically while promoting innovation and public trust. Key principles:

1. Respect People
Treat everyone fairly · be honest about how AI works · keep AI safe · be accountable if things go wrong.
2. Clear Rules
Standards covering data privacy · bias-free design · safety · transparency.
3. Stakeholder Input
Government · business · scientists · community groups · regular citizens; everyone's voice matters.
4. Risk Assessment
Check for potential problems before using AI and plan mitigations in advance.

1.9 Four Major AI Policy Frameworks

1. IBM AI Ethics Board

Focus: Ethical development and deployment of AI technologies across industries.
Components:
  • Ethical principles and guidelines for AI R&D.
  • Recommendations on fairness, transparency, accountability, bias mitigation.
  • Engagement with researchers, policymakers, industry partners.
  • Educational resources to raise AI-ethics awareness.

2. Microsoft's Responsible AI

Focus: Corporate responsibility and ethics in AI.
Components:
  • Principles: fairness, reliability, privacy, inclusivity.
  • Tools for fairness assessments and bias detection.
  • Case studies and best practices across industries.

3. Artificial Intelligence at Google

Focus: Corporate AI ethics & governance.
Components:
  • Principles for ethical AI: fairness, safety, privacy, accountability.
  • Guidelines prioritising human values and societal well-being.
  • Commitments to transparency, collaboration, continuous improvement.

4. European Union's Ethics Guidelines for Trustworthy AI

Focus: Ethical guidelines for AI development and deployment in the EU.
Components:
  • Principles: respect for human autonomy, prevention of harm, fairness, accountability.
  • Requirements for transparency, explainability, auditability.
  • Human-oversight mechanisms for high-impact AI.

1.10 The Moral Machine Game

An ethical dilemma is a situation where a person or group faces conflicting moral principles — no clear "right" or "wrong", and any action has both positive and negative consequences. In AI, ethical dilemmas arise in design, development, deployment, or use — when values conflict.

What is the Moral Machine?

Developed by researchers at MIT, the Moral Machine is an online platform (moralmachine.net) that explores ethical dilemmas in AI through interactive decision-making scenarios. Users face hypothetical situations where autonomous vehicles must make split-second decisions that could cause harm.

Sample dilemma: You are operating a self-driving car. It must choose between:
  • Swerving to avoid pedestrians — endangering your passengers.
  • Staying the course — risking harm to those on the road.
Decisions weigh safety of passengers vs pedestrians, traffic-law adherence, age, gender, social status.
While the scenarios are hypothetical, they reflect real-world dilemmas that AI developers, policymakers, and society must face. The Moral Machine is a powerful tool for sparking conversation, raising awareness, and promoting ethical thinking in the age of AI.

1.11 Survival of the Best Fit Game

Survival of the Best Fit (survivalofthebestfit.com) is an educational game about hiring bias in AI. It demonstrates how misuse of AI can make machines inherit human biases and further inequality. Players experience first-hand how biased data creates a biased hiring algorithm.
Practical (Syllabus) Activity:
  • Play the Moral Machine at moralmachine.net; discuss the ethical trade-offs.
  • Play Survival of the Best Fit at survivalofthebestfit.com; reflect on hiring bias.
  • Document your insights and interpretations from the video "Humans Need Not Apply".
  • Compare AI policies from different organisations (IBM · Microsoft · Google · EU).
  • Role-play scenarios of biased AI systems and their consequences.

1.12 AI for Good — Positive Uses

Ethics is not just about avoiding harm — it's also about directing AI toward doing good. Examples:

🧠
Mental Health Support: AI-powered chatbots provide 24/7 mental-health care and first-line counselling.
💊
Accelerated Drug Discovery: AI shortens new-medicine development cycles from years to months.
🌱
Sustainable Agriculture: AI predicts crop yields and irrigation needs, reducing waste.
♿
Accessibility: Speech-to-text, image captioning, and real-time translation help people with disabilities.
🌍
Climate & Disaster Response: AI maps disaster zones, predicts weather events, and optimises energy grids.
📚
Education Access: Adaptive tutoring for students in underserved areas; personalised learning.

1.13 Case Study & Ethical Dilemma Exercises

Case Study — Biased Facial-Recognition System.
A tech company builds a facial-recognition system for law enforcement. It is initially celebrated for its accuracy, but reports emerge that it disproportionately misidentifies people of colour. Investigation shows the training data consisted predominantly of white faces.

Discussion Questions:
  1. What ethical problems are evident in this scenario?
  2. How can bias be mitigated without compromising accuracy?
  3. How does lack of data diversity contribute to algorithmic bias?
  4. What measures ensure ethical & effective AI deployment in law enforcement?
  5. What are the long-term impacts on public trust if biases are not addressed?
Ethical Dilemma — Autonomous Vehicle Collision.
An autonomous vehicle faces a split-second collision choice. A pedestrian has jaywalked into the road; a group of cyclists legally occupies the bike lane. The AI must:
  • Stay the course and risk harm to the pedestrian; OR
  • Swerve and endanger the cyclists.
Consider: valuation of human life, potential harm, legal vs moral obligations. How do lawmakers, businesses, and the public work together to solve such dilemmas?

1.14 Practical & Certification (Syllabus)

From the syllabus practical list:
  • Summarise insights from the video "Humans Need Not Apply".
  • Role-play on biased AI systems — facial recognition, predictive policing, hiring, healthcare, credit scoring.
  • Comparative study of AI policies — IBM · Microsoft · Google · EU.
  • Play the Moral Machine at moralmachine.net.
  • Play Survival of the Best Fit at survivalofthebestfit.com.
  • Earn a credential on IBM SkillsBuild — AI Ethics.

Quick Revision — Key Points to Remember

  • Ethics = moral principles governing behaviour — right/wrong, fairness, justice, accountability.
  • AI Ethics = guidelines for designing, developing, deploying AI technologies fairly, transparently, and aligned with human values.
  • 5 Pillars of AI Ethics: Explainability · Fairness · Robustness · Transparency · Privacy.
  • Bias = preference/tendency towards something without considering all information fairly.
  • AI Bias can live in training data, algorithm, or predictions.
  • 3 Sources of AI Bias: Training Data Bias · Algorithmic Bias · Cognitive Bias.
  • Real-world examples: Healthcare (CAD accuracy) · Google ads (gender) · Midjourney (gender in profession images).
  • 5 biased AI scenarios: Facial Recognition · Predictive Policing · Hiring Algorithms · Healthcare Algorithms · Credit Scoring.
  • Why mitigate? (1) Makes inequality worse · (2) Reduces trust in tech · (3) Upholds ethical principles.
  • 5 Mitigation strategies: Diverse Data · Detect Bias · Fair Algorithms · Be Transparent · Inclusive Teams.
  • IBM AI Fairness 360: open-source toolkit — 70+ fairness metrics, 10+ mitigation algorithms.
  • 4 AI Policy frameworks: IBM AI Ethics Board · Microsoft Responsible AI · Google AI Principles · EU Ethics Guidelines for Trustworthy AI.
  • Ethical dilemma = conflicting moral principles, no clear right answer.
  • Moral Machine (MIT, moralmachine.net) — explores autonomous-vehicle ethical dilemmas.
  • Survival of the Best Fit (survivalofthebestfit.com) — educational game on AI hiring bias.
  • AI for Good: mental health · drug discovery · agriculture · accessibility · climate · education.
  • Certification: IBM SkillsBuild — AI Ethics.
🧠 Practice Quiz · test yourself on this chapter