AI Lund lunch seminar: Beyond AI Ethics Frameworks - Ethical Considerations and Responsibility in Public Sector AI
Topic: Beyond AI Ethics Frameworks - Ethical Considerations and Responsibility in Public Sector AI
When: 28 May at 12.00-13.00
Where: Online. Link by registration.
Speaker: Clàudia Figueras Julián, doctoral student at Stockholm University
Moderator: Ellinor Blom Lussi, Doctoral student at Lund University
Spoken language: English
Abstract:
As Artificial Intelligence (AI) becomes increasingly embedded in public sector services—from welfare agencies to higher education—there is growing concern about how to ensure these systems are developed and used responsibly (Dignum, 2019). Much of the focus to date has been on producing ethics frameworks and high-level principles such as transparency, fairness, and accountability. But what happens when these principles meet the realities of day-to-day work in the public sector?
In this talk, I present findings from my PhD research, which investigates how stakeholders in Swedish public organisations—such as developers, project managers, and educators—talk about and make sense of ethics and responsibility in their work with AI systems. Drawing on qualitative case studies, I explore how practitioners interpret ethical principles, the tensions they encounter when trying to apply them, and how responsibility is negotiated across technical, organisational, and emotional dimensions.
My aim with this research is to contribute to HCI and AI ethics by advancing conceptual tools that help us better understand how ethics is enacted in practice. These include the ethical stances framework (I-, We-, and They-stances) for analysing how responsibility is constructed (Popova et al., 2024); a relational and distributed view of responsibility that traces how it is shared and shifted over time and across actors—including AI systems themselves (Figueras et al., 2024); and the concepts of ethical tensions (Figueras et al., 2022) and breakages (Figueras et al., 2025), which reveal how ethics emerges through frictions and failures in real-world system use. By examining ethics not as a checklist, but as a relational and situated practice, this research offers a grounded view of Responsible AI that's directly informed by those working closest to implementation.
References
Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer, Cham. https://doi.org/10.1007/978-3-030-30371-6
Figueras, C., Farazouli, A., Cerratto Pargman, T., McGrath, C., & Rossitto, C. (2025). Promises and breakages of automated grading systems: A qualitative study in computer science education. Education Inquiry, 0(0), 1–22. https://doi.org/10.1080/20004508.2025.2464996
Figueras, C., Rossitto, C., & Cerratto Pargman, T. (2024). Doing Responsibilities with Automated Grading Systems: An Empirical Multi-Stakeholder Exploration. Proceedings of the 13th Nordic Conference on Human-Computer Interaction, 1–13. https://doi.org/10.1145/3679318.3685334
Figueras, C., Verhagen, H., & Cerratto Pargman, T. (2022). Exploring tensions in Responsible AI in practice. An interview study on AI practices in and for Swedish public organizations. Scandinavian Journal of Information Systems, 34(2), 199–232.
Fuchsberger, V., & Frauenberger, C. (2023). Doing responsibilities in entangled worlds. Human-Computer Interaction, 0(0), 1–24. https://doi.org/10.1080/07370024.2023.2269934
Popova, K., Figueras, C., Höök, K., & Lampinen, A. (2024). Who Should Act? Distancing and Vulnerability in Technology Practitioners’ Accounts of Ethical Responsibility. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW1). https://doi.org/10.1145/3637434
Registration
Participation is free of charge. Please sign up at ai.lu.se/2025-05-28/registration and we will send you a Zoom link.
About the event
Location:
Online - link by registration
Contact:
ellinor.blom_lussi@lth.lu.se