Eddie L. Ungless

PhD Student

University of Edinburgh

I have recently completed my PhD with Integrated Study at the University of Edinburgh, supervised by Björn Ross, Vaishak Belle and Zachary Horne. My thesis addresses how to measure bias in natural language processing (NLP) technologies using a human-centric approach, since it is ultimately human behaviour that determines the real-world impact of these tools. My research spans how the public responds to biased AI technologies, how to measure bias in a psychologically grounded fashion, and the ethical risks of generative models. My background in linguistics and psychology gives me a distinctive perspective on fairness and ethics in NLP: I draw on social science theory and rigorous experimental design to build a nuanced understanding of NLP harms. I have experience conducting both large-scale automatic evaluations of safety issues in large language models and human evaluations.

Interests
  • Intersectional identity theory
  • Predictive bias
  • Algorithmic justice
