Thesis title: Algorithmic Fairness in Practice: Implications for the Employment Life Cycle
Supervisors: Sandra Peter, Sebastian Boell, Kai Riemer
Thesis abstract:
Organisations are increasingly using algorithms to process and analyse large amounts of data to support their decision-making and predictions about the future. In doing so, they aim to improve their operations and increase their productivity (Aghion, Jones & Jones, 2017; Chen, Chiang & Storey, 2012; Cheng et al., 2019; Kordzadeh & Ghasemaghaei, 2022; Tarafdar, Page & Marabelli, 2022). Especially within human resources (HR) operations across the employment life cycle, organisations have implemented algorithms to support processes such as screening job applications, assessing employees’ performance and terminating employees (Gal et al., 2019; Giermindl et al., 2022; Parent-Rocheleau & Parker, 2022).
However, multiple examples have shown that algorithms can be biased and can discriminate against protected groups and individuals (Angwin et al., 2016, 2017; Dastin, 2018; Kordzadeh & Ghasemaghaei, 2022; Larson et al., 2016; Mehrabi et al., 2021; Selbst et al., 2019). Algorithmic fairness aims to mitigate these biases, discrimination and disadvantages for the people affected by decisions made by an algorithm. However, because no single correct definition of fairness exists, there is still a far-ranging discussion about what algorithmic fairness is and how it can be ensured. Different perspectives on algorithmic fairness have emerged in the literature (Dolata et al., 2022). The technical perspective describes algorithmic fairness through mathematical notions of fairness (Barocas & Selbst, 2016; Dolata et al., 2022). The social perspective describes algorithmic fairness based on the concepts of equality and equity (Binns, 2018; Green, 2022; Holm, 2023). However, as algorithmic decision-making is neither a purely technical nor a purely social task, scholars are calling for viewing algorithmic fairness from a sociotechnical perspective (Dolata et al., 2022; Green, 2022; Holstein et al., 2019; Selbst et al., 2019). The sociotechnical perspective engages the technical and human components in joint optimisation, aiming to generate an effective sociotechnical system within a given context (Dolata et al., 2022; Lee, 2004; Makarius et al., 2020; Sarker et al., 2019).
Furthermore, as there is only limited research on the sociotechnical perspective on algorithmic fairness in the organisational employment context, there is little understanding of how algorithmic fairness is enacted and experienced by different stakeholders in different ways. This qualitative research project therefore investigates how algorithmic fairness is enacted in the organisational employment context by applying Orlikowski’s “Technologies-in-Practice” theory.
On a theoretical level, this research contributes to the discourse about algorithmic fairness across different research disciplines by developing an “algorithmic-fairness-in-practice” theory. It will highlight how algorithmic fairness can be enacted and experienced by various stakeholders in different ways. This understanding is important because it shows how humans can influence the enactment of algorithmic fairness, and it deepens our understanding of how individuals are affected by algorithmic decision-making processes across the organisational employment life cycle. On a practical level, this research project provides an overview of the status quo of the “thinking” and “doing” of algorithmic fairness in the organisational employment context. It will make companies and policy makers aware of the different perspectives on algorithmic fairness. More importantly, it will show that understanding algorithmic fairness in different ways can result in conflicts between stakeholders.