006. Bias in the Machine: Can an AI Be Unfair? (KS4)

Investigate algorithmic bias to understand how artificial intelligence can make unfair decisions and learn how developers try to fix it.

We often assume that computers are completely objective, making decisions based purely on logic and maths. However, artificial intelligence learns from human data, and humans are flawed. In this lesson, we will examine algorithmic bias. You will discover how AI systems used in hiring, law enforcement, and social media can develop unfair prejudices, and what we can do to program fairer systems.

The Illusion of Objectivity


When we think of a computer, we often imagine a perfectly logical machine, incapable of human flaws like prejudice or discrimination. However, the rise of Artificial Intelligence (AI) and Machine Learning (ML) has revealed a critical flaw in this assumption: AI is only as objective as the data it learns from.

The Data Diet


Machine learning models do not inherently know right from wrong; they learn by finding patterns in massive datasets. We call this their training data. If this training data contains historical bias or reflects societal inequalities, the AI will internalise and replicate those biases. For example, if a recruitment AI is trained on ten years of hiring data from a male-dominated tech industry, it may statistically deduce that being male is a preferred trait for a successful candidate, leading to algorithmic discrimination.
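
To see how this happens, consider the minimal Python sketch below. It is a hypothetical illustration, not a real recruitment system: the dataset is invented, and the "model" is just a scoring rule based on past hire rates. Because most successful candidates in the invented history are male, the scores it produces inherit that skew.

# A hypothetical illustration, not a real recruitment system.
# Most past hires in this invented history are male, so a naive model
# that scores candidates against past hire rates inherits that skew.

history = [
    {"gender": "male", "experience": 5, "hired": True},
    {"gender": "male", "experience": 3, "hired": True},
    {"gender": "male", "experience": 4, "hired": True},
    {"gender": "male", "experience": 2, "hired": False},
    {"gender": "female", "experience": 6, "hired": True},
    {"gender": "female", "experience": 4, "hired": False},
    {"gender": "female", "experience": 7, "hired": False},
]

def hire_rate(records, gender):
    """Fraction of past candidates of this gender who were hired."""
    group = [r for r in records if r["gender"] == gender]
    return sum(r["hired"] for r in group) / len(group)

def naive_score(candidate):
    # Treating gender as a predictive feature replicates the old bias.
    return hire_rate(history, candidate["gender"]) * candidate["experience"]

alice = {"gender": "female", "experience": 6}
bob = {"gender": "male", "experience": 4}
print(naive_score(alice))  # 2.0 - penalised despite more experience
print(naive_score(bob))    # 3.0 - rewarded by the biased history

Notice that the unfairness comes entirely from the data: nothing in the code is deliberately prejudiced.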

Real-World Consequences


Algorithmic bias is not just a theoretical problem; it has real-world consequences. We have seen facial recognition software that struggles to accurately identify individuals with darker skin tones because the original training datasets were predominantly composed of lighter-skinned faces. Similarly, predictive policing algorithms have been criticised for disproportionately targeting certain neighbourhoods due to structural bias in historical crime reporting data.
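
A simple way to expose this kind of failure is to measure accuracy separately for each group rather than overall. The Python sketch below is a hypothetical illustration with invented test results: the overall accuracy looks respectable, but the per-group breakdown reveals the disparity.

# Hypothetical test results: 1 = correctly identified, 0 = not.
# The numbers are invented to illustrate a per-group accuracy check.

results = (
    [{"skin_tone": "lighter", "correct": 1}] * 88 +
    [{"skin_tone": "lighter", "correct": 0}] * 2 +
    [{"skin_tone": "darker", "correct": 1}] * 6 +
    [{"skin_tone": "darker", "correct": 0}] * 4
)

def accuracy(rows):
    return sum(r["correct"] for r in rows) / len(rows)

print("Overall:", accuracy(results))  # 0.94 - looks fine at a glance
for tone in ("lighter", "darker"):
    group = [r for r in results if r["skin_tone"] == tone]
    print(tone, accuracy(group))      # lighter ~0.98, darker 0.6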

Engineering Fairness


As a Responsible Innovator, you must recognise that technology is not neutral. It is created by humans, trained on human-generated data, and deployed in a flawed human society. Mitigating bias requires diverse development teams, rigorous auditing of datasets to ensure underrepresented groups are not missing, and continuous testing for fairness. We must stop asking just "Does this algorithm work?" and start asking "Is this algorithm fair, and who might it harm?"
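
As a taste of what dataset auditing can look like, here is a minimal Python sketch. The numbers, group labels, and 20% threshold are invented for illustration; a real audit would be far more thorough. It flags any group whose share of the training data falls below a chosen threshold before training begins.

# A hypothetical dataset audit. The numbers and the 20% threshold are
# invented for illustration; a real audit would be far more thorough.
from collections import Counter

def audit_representation(samples, group_key, min_share=0.2):
    """Return any groups whose share of the dataset is below min_share."""
    counts = Counter(s[group_key] for s in samples)
    total = sum(counts.values())
    return {group: count / total
            for group, count in counts.items()
            if count / total < min_share}

# Invented training set: 90 lighter-skinned faces, 10 darker-skinned faces.
faces = [{"skin_tone": "lighter"}] * 90 + [{"skin_tone": "darker"}] * 10

flagged = audit_representation(faces, "skin_tone")
if flagged:
    print("Underrepresented groups:", flagged)  # {'darker': 0.1}
    print("Halt: gather more diverse data before training.")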

Comprehension Questions


Make sure you have read the passage carefully. Grab yourself a piece of lined paper and put your name, class and date at the top. Attempt the following questions, making sure to answer in full sentences.

Knowledge, Recall & Identification
1. What is the term used to describe the vast amount of information that a machine learning model learns from?
2. State two real-world examples mentioned in the text where algorithmic bias has caused issues.
3. Define what is meant by the phrase "algorithmic discrimination" based on the text.

Analysis & Interpretation
4. Explain how historical bias in a dataset can lead to a recruitment AI making unfair decisions.
5. Analyse why facial recognition software might struggle to identify individuals with darker skin tones.
6. Explain why it is a mistake to assume that a modern AI program is completely objective and neutral.

Synthesis & Creation
7. Propose a strategy that a technology company could use to ensure their new facial recognition training data is fair and balanced.
8. Create a short checklist of three critical questions a Responsible Innovator should ask before launching a newly developed AI system.
9. Formulate a scenario where an AI used in a hospital might develop a bias, and explain the potential root cause of that bias.

Evaluation & Justification
10. To what extent is it possible to create a completely unbiased AI system? Justify your answer using reasoning from the text.
11. Evaluate the claim that "technology is not neutral." Do you agree with this statement? Provide detailed reasons for your judgement.
12. Assess who should be held responsible when an AI system discriminates against a specific group of people: the programmer, the company's management, or the AI itself? Justify your decision.

Plugged Task: The Algorithmic Auditor


The Scenario

You have been hired as an independent ethics consultant by a major tech corporation. They are about to launch a new automated CV-screening AI to filter job applicants. However, early beta tests show it is overwhelmingly rejecting highly qualified candidates from certain demographics. Your task is to author a formal "Algorithmic Impact Assessment" document (a one-page digital report) to present to the board of directors, outlining the bias risks and required data mitigations.

The Persona

You are working as the Responsible Innovator. Your mindset focuses on the societal and ethical implications of technology. You do not just accept that code works; you interrogate whether the data it uses is fair, legal, and representative of a diverse society.

Step 1: Set up your digital workspace

Open a new word processing document and set it up formally.

1. Add a professional header titled "Algorithmic Impact Assessment".
2. Insert subheadings for "Identified Bias", "Data Flaws", and "Mitigation Strategy".

Step 2: Gather your evidence

Conduct targeted research to find real-world examples of algorithmic discrimination in hiring.

Use this pre-configured search to review classic blue-link results on the topic: Search: AI Recruitment Bias Examples
Select one high-profile case to reference in your "Identified Bias" section.

Step 3: Analyse the data flaws

Use an AI assistant to help you articulate the technical reasons behind the bias.

Use the following prompt in Google's AI mode to generate a concise, academic explanation of historical bias in datasets. Copy the insights into your "Data Flaws" section.

Act as an AI ethics professor. Explain how historical bias infects recruitment AI training data. Limit to 100 words. Reading level: KS4 student. Tone: Academic and analytical. Constraints: Use exactly 3 bullet points. NO intro, NO outro, NO deviation from the topic, NO follow-up questions.


Step 4: Synthesise your report

Complete your formal document by writing the "Mitigation Strategy" section.

1. Write a concluding paragraph advising the development team on how to fix the AI.
2. Suggest at least two methods, such as data auditing or expanding the training set to include diverse demographics.
3. Format your document with consistent fonts and save it ready for submission.

Outcome
The report is written in a professional, technical tone appropriate to the KS4 standard.
The "Identified Bias" section correctly cites a real-world example of algorithmic discrimination.
The "Data Flaws" section clearly explains how historical bias enters training datasets.
The "Mitigation Strategy" provides realistic solutions that a Responsible Innovator would recommend.

Unplugged Task: The Fairness Flowchart


As the Responsible Innovator, your job is not just to find bias after it happens, but to stop it being built into the system in the first place. You are going to design a paper-based flowchart that forces software developers to think about ethics at every stage of creating an AI.

Step 1: Get Organised

Grab a large blank piece of paper and two different coloured pens or pencils.

Step 2: Map the Data Journey

Draw out the standard steps for building an AI system using simple boxes and arrows.

Data Collection (Where is the information coming from?)
AI Training (The machine learning phase where patterns are found)
AI Testing (Checking if the program works as expected)
Real-world Launch (People start using the tool)

Step 3: Insert Ethics Checkpoints

Using your second coloured pen, insert new diamond-shaped "Decision" boxes between your standard steps to act as ethical roadblocks.

For example, after "Data Collection", add a diamond asking: Does this dataset exclude any specific communities?
If the answer is YES, draw an arrow pointing to a box labelled Halt: Gather More Diverse Data.
If the answer is NO, draw an arrow continuing to the "AI Training" phase.
Add at least two more checkpoints focusing on fairness and representation.
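
If it helps to connect this unplugged task back to code, the short Python sketch below expresses one decision diamond from the flowchart. The function names and checks are invented for illustration; your paper flowchart remains the deliverable.

# A hypothetical sketch of one decision diamond from the flowchart.
# The checks are invented; in practice this would be a human-led audit.

def excluded_communities(dataset_summary):
    """Names any groups the data audit found missing from the dataset."""
    return dataset_summary.get("missing_groups", [])

def data_collection_checkpoint(dataset_summary):
    missing = excluded_communities(dataset_summary)
    if missing:  # YES branch of the diamond
        return f"Halt: gather more diverse data (missing: {missing})"
    return "Continue to AI Training"  # NO branch

print(data_collection_checkpoint({"missing_groups": ["older applicants"]}))
print(data_collection_checkpoint({"missing_groups": []}))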

Step 4: Build the Safety Net

At the very end of your flowchart, after the launch phase, design a final "Audit" stage. Write a short rule in this box explaining what the company must do if users report algorithmic discrimination.