Social Responsibility in Machine Learning
Prof. Dr. rer. nat. Marius Lindauer
Machine learning models are no longer confined to research: they are deployed in applications that permeate our lives. Examples include social media content moderation, ad selection strategies, facial recognition software for CCTV and policing, and even everyday household items like soap dispensers. Recent years have shown, however, that ML models often fall short of their promised performance in practice because of systematic biases. In this seminar we will examine and critically discuss some of these tools to see how these problems come to be. We will then turn to current research on how to reduce bias in machine learning systems, how researchers can contribute to more transparent ML tools, and how to give more agency to data subjects. Participants need no prior knowledge of research in this area, but should be ready to actively discuss the topics each week.
We strongly recommend that you know the foundations of machine learning before attending this course; you should have completed at least one other ML course in the past. Familiarity with computer vision is a plus, but not necessary.
- Case studies and audits of ML tools
- Algorithmic bias reduction techniques
- Best practices for fairer ML research
The full list of papers can be found on the Stud.IP course page.
The course, including your presentation, will be held in English. We will have weekly sessions, each led by one or two students who introduce the week's topic; everyone is then expected to join the discussion. You will also submit a written report (3-4 pages) on an ML system or dataset of your choice, drawing on 2-3 papers from the seminar for its discussion.