MiniCoDe – Minimise algorithmic bias in Collaborative Decision Making with Design Fiction 

Project Lead

Alessio Malizia, University of Hertfordshire

Co-investigators

Silvio Carta, University of Hertfordshire

Supporting Partner(s) 

WeandAI, Data Reply

Challenge

This project aimed to tackle social injustice in future algorithm-based decision-making applications: specifically, to devise strategies that expose, counterbalance, and remedy bias and exclusion built into algorithms, with attention to fairness, transparency, and accountability.

Methods

They began with a literature review of existing Design Fiction methods (Lindley, 2015; Johnson, 2011) and toolkits, drawing on Google Scholar, Scopus, and Elsevier, and focusing on narrative design and communication to devise a supporting strategy for the workshop facilitator. They also scoped several papers analysing the different types of bias that may be inherently embedded in algorithms and datasets; these served as a guideline for designing the project's experiments. 

They used these to create an initial workshop, which was piloted with a small group of academics. After refining it, they ran a final workshop to see whether a small industry team could effectively adopt it to validate AI service designs. 

The workshops followed this structure: 

  • An inspirational narrative is presented to participants to communicate the design brief. 
  • Participants are clustered into groups, and each group begins generating ideas. 
  • The ideas are refined and then enriched, and the best candidate idea selected within each group is conceptualised. 
  • The resulting concept is analysed against a set of ethics principles embedded in scenarios, in the form of cards, to expose its potential biases (see the sketch after this list for one hypothetical digital rendering of such a deck). 
  • Finally, each group reports its findings to the others for feedback. 
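
To make the card-based analysis step more concrete, here is a minimal sketch of how a deck of ethics-principle cards might be represented digitally. The principles mirror those named in the Challenge section (fairness, transparency, accountability), but the EthicsCard structure and the card texts are invented for illustration and are not the actual MiniCoDe materials.

    from dataclasses import dataclass

    # Hypothetical digital stand-in for the ethics-principle cards used in
    # the bias-exposure step; the card texts are invented examples.
    @dataclass
    class EthicsCard:
        principle: str
        scenario: str  # short narrative placing the principle in context
        prompt: str    # question each group answers about its concept

    DECK = [
        EthicsCard(
            principle="Fairness",
            scenario="A loan-scoring model trained on historical approvals.",
            prompt="Which groups could your concept systematically disadvantage?",
        ),
        EthicsCard(
            principle="Transparency",
            scenario="A ranking algorithm whose criteria users cannot inspect.",
            prompt="Can an affected user understand why a decision was made?",
        ),
        EthicsCard(
            principle="Accountability",
            scenario="An automated triage tool that makes a harmful referral.",
            prompt="Who is responsible when your concept gets it wrong?",
        ),
    ]

    # Each group walks its selected concept through every card in turn.
    for card in DECK:
        print(f"[{card.principle}] {card.scenario}\n  -> {card.prompt}\n")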

Insights

This interdisciplinary project employed a Design Fiction approach to develop a toolkit for collaborative workshop sessions. The toolkit and its supporting materials let stakeholders experiment with scenarios, expose potential bias, and reflect on mitigation strategies at design time. 

They aimed to develop practical responses to social justice issues by experimenting with a new approach to designing socio-technical systems that meet social aspirations and goals: a Design Fiction Toolkit that: 

  • Helps practically minded developers apply social justice principles at design time in the Machine Learning development pipeline, and signals to researchers where further work is needed. 
  • Informs discussion and recommendations that anticipate the impact of Machine Learning applications embedded in socio-technical systems, by involving communities such as the WeandAI network (which fosters awareness and understanding of AI in society). 

The Toolkit responds to the needs of product managers, developers, and data scientists working on ML applications (at Data Reply, for example) to mitigate bias, whether social, racial, or otherwise. Companies employing data scientists trained in using the Toolkit can learn to avoid bias at design time, before socially unjust services reach the public.
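
As an illustration of what a design-time bias check can look like in practice, the sketch below computes a demographic parity gap: the difference in positive-decision rates between demographic groups. This is a standard fairness measure, not a component of the MiniCoDe Toolkit, and the function name and decision data are invented.

    from collections import defaultdict

    def demographic_parity_gap(decisions, groups):
        """Largest difference in positive-decision rates between any
        two demographic groups (0.0 means perfectly even rates)."""
        positives, totals = defaultdict(int), defaultdict(int)
        for decision, group in zip(decisions, groups):
            totals[group] += 1
            positives[group] += int(decision)
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Invented loan-approval decisions (1 = approve) for two groups.
    decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    gap, rates = demographic_parity_gap(decisions, groups)
    print(rates)               # {'A': 0.8, 'B': 0.2}
    print(f"gap = {gap:.2f}")  # gap = 0.60 -> a disparity worth raising at design time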

MiniCode from Silvio Carta on Vimeo.

Future Directions

They intend to develop the Toolkit further and adapt it to two use cases that emerged during their research: 

  • Educational – The toolkit could easily be adapted for educational scenarios, such as Service Design modules, as part of an ethics-awareness activity that embeds elements of Design Thinking. 
  • Digital Teams – The toolkit can be refined and used by small heterogeneous teams within innovative startups interested in ethically designing new AI-based features. 

They also plan to disseminate their work by: 

  • Submitting a Late-Breaking Work to ACM CHI 2022. 
  • Submitting a proposal for an interactive workshop or demo within the CRAFT (Critiquing and Rethinking Accountability, Fairness and Transparency) track at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) 2022. 
  • Engaging the public by publishing feature articles on Medium and The Conversation. 

They also plan to test the toolkit with the companies Data Reply and Digital Catapult, and to introduce it as a case study in data analytics courses at the University of Hertfordshire and in Pisa (the Data Analysis in the Digital Humanities course taught by Alessio Malizia).