AI in Employment Tribunals

Insight by: Kaine Davey

In recent years, we have seen an increase in the use of AI in the workplace and the wider legal sector. While AI tools can be beneficial and efficient, they also come with potential pitfalls. Below, we explore recent Employment Tribunal cases involving the use of AI.

Discriminatory Automated Decision Making

Case: Manjang v Uber Eats

In this case, the claimant, a Black Uber Eats courier, brought a claim of indirect race discrimination. He was required to use an app that employed AI-powered facial recognition software to register for jobs.

The claimant faced repeated requests to resubmit photos to verify his identity. Eventually, his account was suspended following several facial verification failures, leaving him unable to work.

He requested a human review of the decision. After consideration, Uber Eats confirmed his account would be deactivated. The claimant argued the facial recognition software was racially biased, subjecting him to more frequent checks than others.

The case was settled out of court.

Key Takeaway

This was one of the first Employment Tribunal cases involving AI. It highlights the risks employers face when relying on automated decision-making tools. AI systems can potentially carry or amplify biases, leading to discrimination.

Employers must ensure AI usage is:

  • Transparent and clearly explained to employees
  • Subject to ongoing human oversight and review
  • Designed to allow workers to appeal or request human assessment of AI decisions

AI Could Lead to an Increase in Vexatious Claims

Case: Ms M Wright v SFE Chetwode Limited & Ms K Winter

The claimant brought multiple claims relating to whistleblowing detriments and automatic unfair dismissal. She alleged she was subjected to detriments after blowing the whistle about mouldy cheese being served in the company canteen.

At a hearing, Judge Atkinson struck out most of the claims but allowed one detriment claim to continue, on condition that the claimant pay a deposit of £140 and costs of £1,000.

Use of AI in the Case

Ms Wright admitted to using ChatGPT to draft her statements and submissions. Judge Atkinson remarked:

“I am left with strong feeling that Ms Wright is pursuing a claim she does not understand and cannot personally justify…”

He noted that if she wished to continue, she needed to ensure she fully understood her claim.

Key Takeaway

This case suggests that AI tools like ChatGPT may become increasingly common in preparing pleadings and witness statements, especially among unrepresented litigants. However, misuse of AI could lead to:

  • An increase in weak or vexatious claims
  • A lack of understanding and ownership of a case by the party advancing it
  • Potential costs implications for those who misuse AI in litigation

While AI use didn’t ultimately affect the outcome in this case, the judge’s comments highlight the importance of responsible and informed use of such tools in employment litigation.

For more information about this article or any other aspect of people services reimagined, download our App for Apple or Android, and contact your integrated HR, employment law and health & safety team at AfterAthena today.