Algorithmic Bias Case Studies — Fixes that Actually Worked (and Those That Didn’t)

The rise of artificial intelligence and machine learning has brought about unparalleled opportunities. However, it has also introduced significant challenges, one of which is algorithmic bias. This article explores real-world instances where algorithmic biases have surfaced, the solutions that have been attempted, and their outcomes. Understanding these case studies is crucial for building fairer and more inclusive technology.

1. The COMPAS Recidivism Algorithm

COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a risk-assessment tool used in US courts to predict a defendant’s likelihood of reoffending. It became a focal point in discussions of algorithmic bias after ProPublica’s 2016 analysis found racial disparities in its error rates.

“Black defendants were often predicted to be at a higher risk of recidivism than they actually were. Meanwhile, white defendants were often predicted to re-offend at lower rates than they did.”
ProPublica

Fix Attempt: Adjusting for Fairness

  • Original Approach: COMPAS scores were supplied to courts as a proprietary black box, and their use was challenged over the lack of transparency and the racial disparities in the tool’s error rates.
  • Adopted Solution: To mitigate bias, some jurisdictions have opted to reevaluate how risk scores are used in decision-making processes, increasing transparency and incorporating oversight committees.

Outcome: Partial Success

While transparency has improved, challenges persist, as biases reflect broader societal inequalities captured in historical data. Thus, even with refined algorithms, the overarching problem may not be entirely solvable without addressing underlying systemic issues.
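
To make the disparity concrete, the sketch below shows one check an oversight process might run: comparing false positive rates (people who did not reoffend but were scored high risk) across groups, which is the gap ProPublica reported. The data frame, column names, and group labels are hypothetical placeholders, not COMPAS’s actual inputs or outputs.

    # Minimal sketch of a group-wise error-rate audit. All data, column
    # names, and group labels are hypothetical placeholders.
    import pandas as pd

    def false_positive_rate(df: pd.DataFrame) -> float:
        """Share of people who did not reoffend but were scored high risk."""
        did_not_reoffend = df[df["reoffended"] == 0]
        if did_not_reoffend.empty:
            return float("nan")
        return (did_not_reoffend["predicted_high_risk"] == 1).mean()

    # One row per defendant in a hypothetical audit sample.
    scores = pd.DataFrame({
        "group":               ["A", "A", "A", "B", "B", "B"],
        "predicted_high_risk": [1,   0,   1,   0,   0,   1],
        "reoffended":          [0,   0,   1,   0,   1,   1],
    })

    # A large gap between groups in this rate is the kind of disparity
    # ProPublica documented for COMPAS.
    for group, subset in scores.groupby("group"):
        print(group, false_positive_rate(subset))

Equalizing this error rate across groups is only one fairness criterion; it can conflict with others, such as calibration, which is part of why the COMPAS debate remains unresolved.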

2. Amazon’s Hiring Algorithm

Amazon’s experimental resume-screening tool, built to rank job applicants automatically, turned out to be biased against women.

“In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word ‘women’s’ as in ‘women’s chess club captain.’”
Reuters

Fix Attempt: Rebuilding the Model

  • Original Approach: The tool was trained on resumes submitted to Amazon over a 10-year period, most of which came from men, so the training data reflected the industry’s male-dominated hiring history.
  • Adopted Solution: Amazon edited the model to be neutral to explicitly gendered terms, but because there was no guarantee it would not find other proxies for gender, the project was eventually discontinued.

Outcome: Unsuccessful; Project Discontinued

This case serves as a cautionary tale of how training algorithms on biased historical data can perpetuate those biases. Amazon’s experience highlights the importance of diversity in data sets and caution in automated decision-making systems.
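
One lightweight audit that can surface this failure mode is to fit a simple text model on historical screening decisions and inspect which terms receive strongly negative weights. The toy resumes, labels, and model below are illustrative assumptions, not Amazon’s actual pipeline.

    # Minimal sketch: audit a bag-of-words screening model for terms with
    # large negative weights. Toy data only; not Amazon's system.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    resumes = [
        "captain of the women's chess club, python developer",
        "python developer, open source contributor",
        "led the robotics team, java developer",
        "women's coding group organizer, java developer",
    ]
    labels = [0, 1, 1, 0]  # hypothetical historical screening decisions

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(resumes)
    model = LogisticRegression().fit(X, labels)

    # Rank vocabulary terms by learned weight; a strongly negative weight on
    # a gendered term such as "women" is exactly the bias Reuters described.
    ranked = sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                    key=lambda pair: pair[1])
    for term, weight in ranked[:5]:
        print(f"{term:12s} {weight:+.3f}")

An audit like this catches explicit terms, but not subtler proxies (hobbies, schools, phrasing), which is why editing out obvious signals was not enough to save the project.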

3. Health Care Algorithms

Healthcare algorithms, such as those used to assess patient needs, have also exhibited biases. A widely cited 2019 study published in Science found racial bias in an algorithm used to identify chronically ill patients for extra care-management support.

“The algorithm was less likely to refer black patients to programs that provided more personalized care, even when they were sicker than their white counterparts.”
Science

Fix Attempt: Adjusting Outcome Variables

  • Original Approach: The algorithm used healthcare costs as a proxy for health needs. Because less money is historically spent on Black patients with the same level of illness, the proxy systematically understated their needs.
  • Adopted Solution: The researchers advocated replacing cost with outcome variables that more directly capture how sick a patient is.

Outcome: Promising Path Forward

This approach has shown promise: in the study’s experiments, retraining the model to predict direct measures of health rather than cost substantially reduced the disparity in which patients were flagged for additional care.
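
A minimal sketch of that label swap is below. The patient records, feature columns, and regression model are stand-ins assumed for illustration, not the deployed system.

    # Minimal sketch of changing the outcome variable: train the same model
    # on a direct measure of health need instead of cost. All columns and
    # values are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    patients = pd.DataFrame({
        "age":               [67, 54, 71, 63, 58],
        "prior_er_visits":   [3, 8, 2, 10, 5],
        "annual_cost_usd":   [4000, 2500, 9000, 3000, 2800],  # biased proxy for need
        "active_conditions": [2, 5, 3, 6, 4],                 # closer to true need
    })
    features = patients[["age", "prior_er_visits"]]

    # Original formulation: predict spending, which tracks access to care as
    # much as it tracks illness.
    cost_model = LinearRegression().fit(features, patients["annual_cost_usd"])

    # Adjusted formulation: predict a direct health measure, e.g. the number
    # of active chronic conditions, so the label itself carries less bias.
    need_model = LinearRegression().fit(features, patients["active_conditions"])

The modeling pipeline stays the same; only the target changes, which is part of what makes this kind of fix practical to adopt.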

4. Google Photos Tagging Incident

In 2015, Google Photos’ image-recognition software mistakenly tagged images of black people as “gorillas.” This incident highlighted biases inherent in image-recognition technology.

“We’re appalled and genuinely sorry that this happened. We are taking immediate action to prevent this type of result from appearing.”
The Guardian

Fix Attempt: Improving Training Data

  • Original Approach: The classifier failed on images of Black people in part because its training data did not adequately represent the diversity of human appearances.
  • Adopted Solution: Google reportedly removed the “gorilla” tag from the product’s vocabulary as an immediate stopgap, while working to diversify its training data and refine its image-classification models to reduce such errors.

Outcome: Continuous Improvement

While there has been progress, continuous updates and rigorous checks remain necessary. The incident underscores the need for representative training data and for evaluating accuracy separately for every demographic group before release, as sketched below.
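
One such check is disaggregated evaluation: measuring accuracy per subgroup of the test set so that failures concentrated in one group are not hidden by a strong overall average. The evaluation set, labels, and group names below are hypothetical.

    # Minimal sketch of a disaggregated (per-subgroup) accuracy check.
    # Labels, predictions, and subgroup names are hypothetical.
    from collections import defaultdict

    # (true_label, predicted_label, subgroup) for a held-out evaluation set.
    eval_results = [
        ("person", "person", "group_a"),
        ("person", "person", "group_a"),
        ("person", "animal", "group_b"),   # the kind of error behind the incident
        ("person", "person", "group_b"),
    ]

    correct = defaultdict(int)
    total = defaultdict(int)
    for true_label, predicted, subgroup in eval_results:
        total[subgroup] += 1
        correct[subgroup] += int(true_label == predicted)

    # Release criteria can then require acceptable accuracy for every group,
    # not just on average.
    for subgroup in sorted(total):
        print(subgroup, correct[subgroup] / total[subgroup])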

Conclusion

Algorithmic bias poses significant challenges across sectors. While some solutions have shown promise, others have underlined the importance of caution, transparency, and diversity in AI development. Tackling these biases requires ongoing efforts, systemic reforms, and collaboration across industries to create fairer algorithms.
