Insight: Challenging Bias in AI
Best Practice from Experts in the Field
We hosted an expert panel in February 2024 at Engine Shed, Bristol. Leading voices in AI research, data ethics, and diversity shared practical insights into how we can build fairer, more inclusive AI systems.
Professor Phil Taylor spoke about the new Isambard-AI at the University of Bristol and what we can do to promote diversity and inclusion in STEM at the student level.
Joyann Boyce was thought-provoking as always with her keynote on "AI as a puppy" - be sure to check out her TED talk!
Dr. Rachel Dugdale brought such an important perspective from her decades of experience in AI and machine learning - particularly in how we move forward fairly and ethically.
As artificial intelligence becomes more deeply embedded in our daily lives—powering everything from streaming recommendations to hiring algorithms—the importance of challenging bias in AI has never been more urgent.
1. Build Explainability Into AI From the Start
Professor Phil Taylor, Pro Vice Chancellor for Research at the University of Bristol, emphasized the critical need for explainable AI, especially when it's deployed in safety-critical systems like healthcare or energy. As the lead behind the development of Isambard-AI—a new national AI research resource—Taylor is championing a future where advanced computing power (5,500 GPUs to be exact) is paired with robust ethical frameworks.
Best Practice:
Embed explainability and ethical training into AI education and research. Universities should incorporate fairness, bias detection, and transparent algorithm design into both undergraduate and doctoral programs.
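To make that concrete, here is a minimal sketch of one common explainability technique, permutation importance, using scikit-learn. The dataset, column names, and model choice are hypothetical placeholders, not anything presented by the panel.

```python
# A minimal sketch of one explainability technique: permutation importance.
# The dataset, column names, and model choice here are hypothetical.
import pandas as pd
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("applicants.csv")                 # hypothetical tabular dataset
X, y = df.drop(columns=["hired"]), df["hired"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a rough indication of which inputs the model's decisions depend on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

The specific library matters less than the habit: before a model reaches a safety-critical setting, teams should be able to say which inputs are driving its outputs.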
2. Increase Representation in AI Teams and Training Data
Bias in AI is often a reflection of societal imbalances, and underrepresentation in development teams only reinforces those issues. Taylor pointed out that the University of Bristol is working to redress gender imbalances in academia, aiming for 50% female professors by 2023. They’ve also updated their promotions framework to reduce bias—with notable success (85% of recent female applicants were promoted).
Best Practice:
Proactively create inclusive academic environments. Update recruitment, progression, and curriculum practices to promote representation from underrepresented groups—especially Black academics, women in STEM, and marginalized communities.
3. Understand and Challenge Bias in Content Creation Tools
Joyann Boyce, an expert in inclusive marketing, warned of racial and gender bias in facial recognition tools and content generation platforms. She shared findings from her own experiments, showing that AI often underrepresents or misrepresents people in lower-income jobs or from marginalized backgrounds.
Best Practice:
Treat AI tools like potentially dangerous pets: interact with caution, understand their limitations, and always supervise outputs. Use diverse training datasets, and ensure content creation is reviewed through a human lens that includes lived experience.
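As a rough illustration of the "diverse training datasets" point, the sketch below tallies how different groups are represented in a labeled dataset before it is used. The file name and the "group" and "occupation" columns are hypothetical.

```python
# A small sketch of auditing representation in a training dataset before use.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("image_captions.csv")       # hypothetical labeled dataset

# How often does each group appear overall, and within each occupation label?
# Large gaps flag data that may under- or misrepresent certain groups.
overall = df["group"].value_counts(normalize=True)
by_occupation = (
    df.groupby("occupation")["group"]
      .value_counts(normalize=True)
      .unstack(fill_value=0.0)
)

print("Overall representation:\n", overall.round(3))
print("\nRepresentation by occupation label:\n", by_occupation.round(3))
```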
4. Promote Transparency in AI Decision-Making
Dr. Rachel Dugdale, with over 15 years of experience in AI leadership, emphasized the need for greater transparency in how AI decisions are made—from Netflix’s show renewals to hiring algorithms. She discussed how seemingly innocuous signals (like how fast you watch a series) can lead to biased decisions.
Best Practice:
Prioritize transparency in data collection and model decision-making processes. Make it clear how and why certain decisions are being made, especially when those decisions impact people's lives.
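One lightweight way to act on this is to log a human-readable decision record alongside every automated decision. The sketch below assumes a simple weighted-sum scoring model; the feature names, weights, threshold, and model identifier are made up for illustration.

```python
# A minimal sketch of recording why an automated decision was made.
# The scoring model, feature names, weights, and threshold are hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    model_version: str
    inputs: dict
    score: float
    decision: str
    top_reasons: list          # human-readable reasons behind the score
    timestamp: str

def explain_decision(weights: dict, inputs: dict, threshold: float = 0.5) -> DecisionRecord:
    # Per-feature contributions for a simple weighted-sum score.
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in inputs.items()}
    score = sum(contributions.values())
    top = sorted(contributions, key=lambda k: -abs(contributions[k]))[:3]
    return DecisionRecord(
        model_version="screening-model-v2",        # hypothetical identifier
        inputs=inputs,
        score=round(score, 3),
        decision="advance" if score >= threshold else "human_review",
        top_reasons=top,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

record = explain_decision(
    weights={"skills_match": 0.5, "years_experience": 0.05, "referral": 0.2},
    inputs={"skills_match": 0.8, "years_experience": 4, "referral": 0},
)
print(json.dumps(asdict(record), indent=2))        # store this next to the decision
```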
5. Use Synthetic Data Carefully
While synthetic data can help fill gaps where real-world data is limited, it’s not without risks. Taylor suggested it as a solution in domains where real data lacks diversity. However, Boyce raised concerns, citing examples where AI-generated images of Black women were used without real representation.
Best Practice:
If using synthetic data, ensure it reflects real, diverse populations—and don’t rely on it as a substitute for actual representation or input from marginalized groups.
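A practical first step is to compare the distribution of key attributes in synthetic data against a real reference sample before using it. The sketch below assumes both are available as CSV files sharing a "group" column; the names are placeholders.

```python
# A rough sketch of sanity-checking synthetic data against a real reference
# sample before use. File names and the "group" column are hypothetical.
import pandas as pd

real = pd.read_csv("real_sample.csv")
synthetic = pd.read_csv("synthetic_sample.csv")

real_dist = real["group"].value_counts(normalize=True)
synth_dist = synthetic["group"].value_counts(normalize=True)

comparison = pd.DataFrame({"real": real_dist, "synthetic": synth_dist}).fillna(0.0)
comparison["gap"] = (comparison["synthetic"] - comparison["real"]).abs()

# Large gaps suggest the synthetic data does not reflect the real population
# and should not stand in for actual representation or consultation.
print(comparison.sort_values("gap", ascending=False).round(3))
```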
6. Invest in Inclusive Education and Career Pathways
To tackle the pipeline problem, Dugdale advocated for funded boot camps and apprenticeships that bypass traditional academic gatekeeping. By opening doors to adults from underrepresented backgrounds, the industry can diversify its talent base.
Best Practice:
Sponsor accessible training programs for career-switchers. Ensure that course materials and assessments reflect diverse cultural and societal perspectives.
7. Ensure AI Is Used to Support, Not Replace, Human Judgment
One of the most consistent messages was the need to use AI as a decision support tool—not a decision-maker. Boyce and others cautioned against blindly trusting outputs from systems trained on flawed or incomplete data.
Best Practice:
Maintain human oversight in AI-driven decisions, particularly in high-stakes areas like hiring, policing, and healthcare. Build systems with fail-safes and review mechanisms to prevent harm.
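As a sketch of what "decision support, not decision-maker" can look like in practice, the routine below only lets a prediction act automatically when the stakes are low and the model is confident; everything else is flagged for a person. The domains, confidence threshold, and scikit-learn-style predict_proba interface are assumptions for illustration.

```python
# A minimal sketch of routing AI outputs through human review.
# Domains, thresholds, and the predict_proba interface are assumptions.

HIGH_STAKES_DOMAINS = {"hiring", "healthcare", "policing"}
CONFIDENCE_FLOOR = 0.90

def route_decision(domain: str, features, model) -> str:
    """Return 'auto' only for low-stakes, high-confidence cases;
    everything else is flagged for human review."""
    confidence = max(model.predict_proba([features])[0])
    if domain in HIGH_STAKES_DOMAINS or confidence < CONFIDENCE_FLOOR:
        return "human_review"
    return "auto"

class StubModel:
    """Stand-in for a trained classifier so the sketch runs as-is."""
    def predict_proba(self, X):
        return [[0.05, 0.95] for _ in X]

print(route_decision("hiring", [1, 0, 3], StubModel()))   # -> human_review (high stakes)
print(route_decision("retail", [1, 0, 3], StubModel()))   # -> auto (confident, low stakes)
```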
Final Thoughts
Bias in AI isn’t just a technical problem—it’s a social one. And solving it requires inclusive leadership, diverse data, ethical frameworks, and constant vigilance. As AI becomes more powerful and widespread, these best practices can help ensure we’re building systems that reflect the best of humanity, not the worst of our history.
Let’s keep asking: Who is at the table when AI is built—and who is missing?
Published February 2024