
Designing Ethical Artificial Intelligence for All

A former McGill Cognitive Science student wants AI ethics to take everyone into account

How do we design ethical AI systems? Considering the pervasive nature of AI systems, how do we make sure the experiences of people from a variety of backgrounds are accounted for in their design processes?

Jocelyn Wong, a former McGill Honours Cognitive Science student, wanted to explore these questions. She wanted to know how the processes of designing AI systems can be audited in a way that takes into account the lived experiences of people from all walks of life.

In a relatively new field such as AI ethics, "A lot of the proposals that came out about auditing were about the process and what procedures should be in place, but they weren't really thinking about the types of people [who] are involved, and even if they did, it was always in the context of the organization," said Wong. "There wasn't any sort of consideration for who these people are more broadly."

Wong was one of CDSI's BMO Jr. Responsible AI Scholars last year. This award goes to projects that promote the responsible use of AI in a variety of different domains, including art, policy, and decision-making. She spent all of last summer researching her ideas under the supervision of McGill professors Jocelyn Maclure and AJung Moon.

To do this, she looked at a real-life 2022 case study published in the journal AI and Ethics. This study followed an ethics-based audit (EBA) of the pharmaceutical company AstraZeneca. After following the company's activities for a year, the researchers found the main challenges with conducting an EBA were what they called "classical governance challenges," including "ensuring harmonized standards across decentralized organizations, demarcating the scope of the audit, driving internal communication and change management, and measuring actual outcomes."

Based on her analysis of this study, Wong concluded that many demographics are underrepresented in auditing AI systems, and that the roots of this imbalance in the technology field go deep. Her project, "Non-Technical Operationalization in AI Ethics Audits: Reconsideration of Stakeholders Through Black Feminism," argues for an intersectional approach to understanding how technology can be used as an instrument of power.

Wong relies in particular on the Matrix of Domination framework, first articulated by the American academic Patricia Hill Collins, to analyze stakeholder dynamics in an EBA. She says her work "demonstrates how the Matrix of Domination could serve as a framework to illustrate how one's understanding of AI ethics, and therefore participation in an ethics-based audit, is informed by their experiences within the interlocking axes of oppression." Through her research, Wong hopes to "demonstrate the value of non-technical adaptations of democratic control in conversations of AI governance."

If you're a McGill student with an idea for an AI-related research project, start brainstorming now so you can be ready for our call for applications early in 2026.

Learn more about past projects and how the BMO Jr. Awards can help you in your research.
