Speaker
Description
As data scientists, we work with data about the behaviour of users on online platforms, and this data is usually characterized by imbalances. These imbalances can be associated with different properties of the items the users interact with. One example is item popularity: popular items receive far more interactions than niche ones. Another example is the composition of an industry; e.g., most movies are produced in the US because of the Hollywood film industry, causing a geographic imbalance with respect to the items' country of production.
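To make this popularity imbalance concrete, the following minimal Python sketch (using a synthetic, Zipf-like interaction log; all data and parameters are hypothetical) counts interactions per item and summarizes their concentration with the Gini coefficient:

```python
import numpy as np

def gini(counts):
    """Gini coefficient of per-item interaction counts (0 = perfectly balanced, 1 = maximally skewed)."""
    x = np.sort(np.asarray(counts, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Synthetic interaction log: item ids drawn from a Zipf-like (long-tail) distribution.
rng = np.random.default_rng(0)
item_ids = rng.zipf(a=2.0, size=10_000)
_, counts = np.unique(item_ids, return_counts=True)   # interactions per distinct item

print(f"distinct items: {counts.size}, interactions: {counts.sum()}")
print(f"Gini of item popularity: {gini(counts):.2f}")  # close to 1 => strongly imbalanced
```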
Recommender systems produce their results by learning patterns from the data they observe. Unfortunately, when these patterns are learned from imbalanced data, the resulting recommendations exacerbate the imbalances, leading to negative consequences for the stakeholders involved in the system (e.g., the under-recommendation of niche items or of items produced outside the US).
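This amplification effect can be illustrated with a few lines of code: below, a naive "most popular unseen items" baseline is applied to a synthetic long-tail dataset, and the share of recommendation slots given to the most popular items is compared with their share of interactions in the data. Everything here (data, baseline, parameters) is invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, k = 1_000, 500, 10

# Synthetic implicit feedback with a long-tail popularity profile over items.
item_weights = 1.0 / np.arange(1, n_items + 1)
interactions = rng.random((n_users, n_items)) < 0.05 * item_weights / item_weights.mean()

popularity = interactions.sum(axis=0)                   # interactions per item

def recommend_most_popular(user_row, k=10):
    """Naive baseline: recommend the k globally most popular items the user has not seen."""
    scores = np.where(user_row, -np.inf, popularity.astype(float))
    return np.argsort(scores)[-k:]

head = np.argsort(popularity)[-n_items // 10:]          # top 10% most popular items

share_in_data = interactions[:, head].sum() / interactions.sum()
recs = np.concatenate([recommend_most_popular(interactions[u], k) for u in range(n_users)])
share_in_recs = np.isin(recs, head).mean()

print(f"head items' share of interactions in the data: {share_in_data:.2f}")
print(f"head items' share of recommendation slots:     {share_in_recs:.2f}")  # typically much larger
```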
In this talk, we will provide an overview of how data imbalances can affect a recommender system in terms of bias and fairness. We will then delve into two case studies, showing how a recommender system can be affected by issues of algorithmic bias [1] and fairness [2], and how these issues can be mitigated through algorithmic interventions.
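As a rough sketch of what such an algorithmic intervention can look like, the snippet below applies a generic popularity-penalizing re-ranking as a post-processing step. This is only an illustrative example with invented data and parameters (scores, popularity, lam), not the specific debiasing or fairness methods proposed in [1] and [2].

```python
import numpy as np

def rerank_with_popularity_penalty(scores, popularity, k=10, lam=0.5):
    """Re-rank one user's candidates by trading relevance off against item popularity.

    Generic post-processing intervention (illustrative only): the adjusted score is
    relevance minus lam * normalized popularity, giving long-tail items a better
    chance of entering the top-k list.
    """
    pop_norm = (popularity - popularity.min()) / (popularity.max() - popularity.min() + 1e-12)
    adjusted = scores - lam * pop_norm
    return np.argsort(adjusted)[::-1][:k]

# Hypothetical usage: relevance scores from some trained model, correlated with popularity.
rng = np.random.default_rng(2)
popularity = rng.zipf(a=2.0, size=1_000).astype(float)   # skewed interaction counts
pop_norm = (popularity - popularity.min()) / (popularity.max() - popularity.min())
scores = 0.6 * pop_norm + 0.4 * rng.random(1_000)

top_plain = np.argsort(scores)[::-1][:10]
top_debiased = rerank_with_popularity_penalty(scores, popularity, k=10, lam=0.5)
print("plain top-10:   ", top_plain)
print("debiased top-10:", top_debiased)
```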
[1] Boratto, L., Fenu, G., & Marras, M. (2021). Connecting user and item perspectives in popularity debiasing for collaborative recommendation. Information Processing & Management, 58(1), 102387.
[2] Gómez, E., Boratto, L., & Salamó, M. (2022). Provider fairness across continents in collaborative recommender systems. Information Processing & Management, 59(1), 102719.