
Policy recommendations
- The draft Directive on improving working conditions in platform work (Platform Work Directive) clarifies the employment status and working conditions of platform workers. Importantly, it also focuses on regulating algorithmic management. While a step in the right direction, the chapter on algorithmic management fails to deliver the full benefits workers might have expected: by importing some GDPR rights into the draft Platform Work Directive without fixing their pre-existing shortcomings, the legislator has helped to foster an inconsistent regulatory environment that leaves workers in legal uncertainty and unable to exercise important rights.
- Fair and transparent algorithmic management should be guaranteed by strengthening workers’ ability to fully exercise their rights of access to their data, rectification, erasure, restriction of processing and data portability.
- The final Platform Work Directive should set limits on algorithmic management: by default, automated decisions should be presumed to be fully automated unless the digital platform demonstrates meaningful human intervention. Recent court cases have shown that platforms often falsely claim that algorithm-based decisions were made by a human or with human involvement.
Background
Algorithmic management is a key building block of the platform business model, implemented through automated or semi-automated decision-making (ADM) systems. In December 2021, the European Commission proposed the Directive on improving working conditions in platform work, with the aim of enhancing platform workers’ working conditions and social rights and contributing to the sustainable growth of digital labour platforms. This Policy Brief focuses on algorithmic management, another essential part of the draft Platform Work Directive.
We argue that algorithmic management is not just a tool used by companies to organise their operations, but a game-changing management approach that affects workers on many levels. Algorithmic management systems are not neutral: they make real-time decisions about workers, plan and allocate tasks, but also profile workers, predict their behaviour and performance, and even ‘recognise’ their emotions. For workers, these decisions are difficult to understand and almost impossible to contest. This creates risks such as loss of autonomy, bias, discrimination, income unpredictability and surveillance. As the EU Commission recognises in its impact assessment, ‘it can have nefarious effects on the working conditions of people working through platforms, regardless of their employment status’ (European Commission 2021).
Using the lessons of the GDPR and the forthcoming AI Act as a framework, this Policy Brief proposes a definition of algorithmic management, analyses the main provisions regulating its use, identifies concerns related to their legal interpretation and procedural aspects, and discusses the issues associated with the exercise of the proposed new rights. It offers constructive recommendations to improve the draft Platform Work Directive’s ability to regulate algorithmic management fairly and effectively, and to bridge the gap between interpretation and implementation.
Defining algorithmic management
Algorithmic management is a central feature of the platform economy. It is gradually spreading into conventional work environments, in sectors such as banking and finance, education, healthcare, services and retail, as well as in public services (Wood 2021). It uses automated or semi-automated decision-making systems, machine learning and other data-driven technologies, and relies substantially on the processing of fine-grained worker data and metadata.
Annex III(4) of the draft AI Act, on ‘Employment, workers management and access to self-employment’, describes algorithmic management without referring to it as such and classifies the practice as high-risk. The draft Platform Work Directive provides no definition of algorithmic management and refers only (Article 6) to ‘automated monitoring systems used to monitor, supervise or evaluate the work performance of platform workers through electronic means’, and ‘automated decision-making systems which are used to take or support decisions that significantly affect those platform workers’ working conditions, in particular their access to work assignments, their earnings, their occupational safety and health, their working time, their promotion and their contractual status, including the restriction, suspension or termination of their account.’
Based on the above, and taking inspiration from the existing literature, we define algorithmic management as automated or semi-automated computing processes that perform one or more of the following functions: (1) workforce planning and work task allocation; (2) dynamic, per-task piece rate pay setting; (3) controlling workers by monitoring, steering, surveilling or rating their work and the time they need to perform specific tasks, and nudging their behaviour; (4) measuring actual worker performance against the predicted time and/or effort required to complete a task and providing recommendations on how to improve worker performance (Kellogg et al. 2019); and (5) penalising workers, for example through termination or suspension of their accounts (Mateescu et al. 2019). Metrics might include estimated time, customer ratings or the worker’s rating of the customer.
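To make this definition more concrete, the sketch below illustrates, in deliberately simplified Python, what functions (1), (2) and (5) can look like in practice. All names, thresholds and rules are hypothetical, invented purely for illustration; real platform systems are far more complex and typically opaque.

```python
# Purely illustrative sketch of three algorithmic management functions.
# All names, thresholds and rules are hypothetical and drastically
# simplified; they are not drawn from any actual platform system.
from dataclasses import dataclass


@dataclass
class Worker:
    worker_id: str
    avg_customer_rating: float     # e.g. 1.0-5.0, used by functions (3)/(5)
    predicted_task_minutes: float  # predicted effort, used by function (1)


def allocate_task(pool: list[Worker]) -> Worker:
    # Function (1): work task allocation, here to the worker with the
    # shortest predicted completion time.
    return min(pool, key=lambda w: w.predicted_task_minutes)


def piece_rate(base_rate: float, demand_multiplier: float) -> float:
    # Function (2): dynamic per-task piece rate pay, scaled in real time
    # by current demand ('surge pricing' for labour).
    return round(base_rate * demand_multiplier, 2)


def review_account(worker: Worker, rating_floor: float = 4.6) -> str:
    # Function (5): a fully automated penalty. A single threshold decides
    # suspension with no human review -- exactly the kind of decision this
    # Policy Brief argues should be treated as 'solely automated'.
    return "suspended" if worker.avg_customer_rating < rating_floor else "active"


pool = [Worker("A", 4.8, 22.5), Worker("B", 4.4, 18.0)]
print(allocate_task(pool).worker_id)        # -> B (shortest predicted time)
print(piece_rate(5.00, 1.3))                # -> 6.5 (surge-priced task)
print([review_account(w) for w in pool])    # -> ['active', 'suspended']
```

Even this toy version shows why such systems are consequential for workers: a fractional change in an opaque rating threshold can mean the difference between working and losing one’s account.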
Analysis of the draft Platform Work Directive and proposals for improvement
The draft Platform Work Directive provides new labour rights around the use of algorithmic management, but several shortcomings remain that make these rights unclear and difficult for workers to exercise. Below, we make some proposals to ensure that workers benefit from legal certainty.
Coherence between the draft Platform Work Directive, the GDPR and the draft AI Act
The draft Platform Work Directive, the AI Act and the GDPR are closely linked, so coherence between these three instruments is essential. When drafting the chapter of the draft Platform Work Directive on algorithmic management, the legislator opted to import some GDPR rights, but without addressing their shortcomings. The right to explanation and the right to meaningful human intervention established by the GDPR are notoriously difficult to implement; if nothing is done, this difficulty will become an intrinsic part of the Platform Work Directive as well.
Chapters 3 and 4 of the draft Platform Work Directive refer to a series of rights emanating from the GDPR, among them the rights to information, to explanation, and to the review and rectification of a decision. In Articles 15–18 and 20, the GDPR establishes a series of other rights – access, rectification, erasure, restriction of processing and portability – which should be reinforced more explicitly in the Platform Work Directive, as they are especially relevant for workers. The 2022 joint action by the privacy NGO noyb and the global trade union federation UNI Global Union, which filed access requests for workers’ data under Article 15 GDPR at Amazon warehouses in five EU countries, shows that the GDPR and the rights it grants are essential to strengthening workers’ rights (Uni-Global 2022).
Article 43(2) of the AI Act obliges AI providers to carry out a conformity assessment of high-risk AI systems. Given the intrusive nature of such systems, this conformity assessment should not be left in the hands of the providers themselves but entrusted to an external authority, and it should take place before the systems are put into service.
Further, Article 13 of the AI Act proposes measures on transparency and the provision of information to users of high-risk AI systems, namely the platforms. Those rights should also extend to the people affected by those systems, in particular platform workers.
Transparency and use of automated monitoring and decision-making systems
Transparency of algorithmic management is the first step towards genuine accountability. Article 6 requires digital labour platforms to ensure transparency by informing workers about the use of (a) automated monitoring systems and (b) automated decision-making systems, including the categories of decisions, the parameters taken into account and the grounds for those decisions. At the same time, Article 6 prohibits the processing of workers’ personal data that are ‘not intrinsically connected to and strictly necessary for the performance of the contract’, including private conversations and data related to the worker’s health, psychological or emotional state. This prohibition should also cover other categories of data, such as trade union membership and political opinions, to be consistent with GDPR Article 9. It is worth noting that enforcing this prohibition falls under the competence of national DPAs, who are not labour experts.
Article 6 does not contain a clear prohibition of profiling or of processing data through a fully automated system, which would be in line with GDPR Article 22(1); instead, it merely requires that information be provided about such practices. Nor does it prohibit the automatic termination and suspension of accounts. The draft Platform Work Directive establishes the right to information, as well as a specification of what kind of information regarding automated monitoring and decision-making systems is to be made available. Profiling, fully automated data processing, and automatic account termination and suspension, however, should appear in the Platform Work Directive as prohibited practices.
Finally, there is a risk of conflict between Article 6(1)(b) of the draft Platform Work Directive and GDPR Article 22(1): Article 6(1)(b), read in conjunction with Article 7(1), seems to suggest that any type of automated decision-making is possible, whereas GDPR Article 22(1) prohibits such decisions unless certain conditions are met (GDPR Article 22(2)). Accordingly, we suggest that Article 6(1)(b) of the draft Platform Work Directive be preceded by the safeguard clause ‘without prejudice to Article 22(1) GDPR’. Incidentally, one of the three exceptions under GDPR Article 22(2) to the right ‘not to be subject to a decision based solely on automated processing, including profiling’ – namely, obtaining the data subject’s ‘explicit consent’ (Article 22(2)(c)) – is not applicable in the employment context. This should be stated in the Platform Work Directive to avoid the misuse of consent, as often happens.
Human monitoring of automated systems
Article 7 requires digital labour platforms to put in place human resources to monitor and evaluate the impact of decisions taken or supported by automated decision-making systems. Platforms should evaluate the risks posed by these systems, assess the safeguards in place for the risks identified, and introduce preventive and protective measures.
Furthermore, Article 7 states that platforms ‘shall not use automated monitoring and decision-making systems in any manner that puts undue pressure on platform workers or otherwise puts at risk the physical and mental health of platform workers’. This is the first time psychosocial risks have been explicitly mentioned in EU legislation. The draft Platform Work Directive, however, fails to address the prevention of these risks, let alone their assessment and mitigation. Research carried out by the ETUI has found that platform workers face specific psychosocial risks associated with three dimensions of their work: social and physical isolation; work transience and boundaryless careers; and algorithmic management and digital surveillance. These risks can trigger negative worker outcomes (Bérastégui 2021). Moreover, the expression ‘shall not use’ is toothless, as no sanction is foreseen if platforms fail to comply with this prohibition.
These inconsistencies aside, there are also some omissions in the draft Platform Work Directive. To be consistent with the GDPR, we suggest including provisions to ensure that data processing at work is proportionate to the risks faced by the employer and that the information registered through ongoing monitoring is minimised as far as possible (as suggested by the Article 29 Working Party in its 2017 Opinion on data processing at work).
Article 7(3) requires platforms to ‘ensure [that] sufficient human resources’ are available ‘for monitoring the impact’ of automated decisions. More guidance is needed to clarify the meaning of ‘sufficient’ and ‘monitoring’, and who should carry this out.
Human review of significant decisions and the right to explanation
We argue that, because algorithmic management is a high-risk AI application (AI Act Annex III(4)), the Platform Work Directive should place limits on it: decisions should be presumed to be fully automated unless the digital platform demonstrates meaningful human intervention (Ekker Advocatuur 2022). This would prevent platforms from falsely claiming that a given decision was not automated but made by a human or with human involvement, as shown in the Uber drivers v Uber B.V. case at the Amsterdam Court of Appeal on data access and the transparency of automated decision-making (WEI 2022). The Platform Work Directive could also clarify that decisions leading to the deactivation of accounts – de facto dismissal – should never be automated.
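The proposed presumption can be expressed as simple decision logic. The following sketch is hypothetical – the record fields and criteria are our own, drawn neither from the draft Directive nor from any court ruling – and shows the reversal of the burden of proof: a decision counts as fully automated unless the platform can document review by a person with real authority to overrule it.

```python
# Hypothetical sketch of the proposed legal presumption: a decision is
# treated as fully automated unless the platform can demonstrate
# meaningful human intervention. Field names are illustrative only.
from dataclasses import dataclass


@dataclass
class DecisionRecord:
    decision_id: str
    reviewed_by_human: bool = False        # did a person look at it at all?
    reviewer_could_overrule: bool = False  # did they have real authority?
    reasons_documented: bool = False       # is the review traceable?


def is_fully_automated(record: DecisionRecord) -> bool:
    # Presumption: fully automated by default. Only a documented review by
    # a person with genuine authority to overrule counts as 'meaningful'
    # human intervention -- rubber-stamping does not rebut the presumption.
    meaningful = (
        record.reviewed_by_human
        and record.reviewer_could_overrule
        and record.reasons_documented
    )
    return not meaningful


print(is_fully_automated(DecisionRecord("D-1")))                      # True
print(is_fully_automated(DecisionRecord("D-2", True, False, False)))  # True: rubber stamp
print(is_fully_automated(DecisionRecord("D-3", True, True, True)))    # False
```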
The right to be informed about automated monitoring and decision-making systems and the right to explanation enshrined in Articles 8 and 9 have their roots in the GDPR (among others, Articles 13(2)(f), 14(2)(g) and 15(1)(h) GDPR). The GDPR also provides individuals with the right not to be subject to a decision based solely on automated processing, including profiling (Article 22(1) GDPR); the right to know of the existence of such processing and to receive meaningful information about its logic, significance and consequences; and the right to obtain human intervention on the part of the controller, to express their point of view and to contest the decision (Article 22(3) GDPR).
In theory, this should benefit workers. In practice, exercising these rights is very difficult (as shown in the Worker Info Exchange and App Drivers & Couriers Union (ADCU) v Uber case in the United Kingdom). Even with the guidance of the Article 29 Data Protection Working Party on automated individual decision-making and profiling, asserting GDPR Article 22 rights presents practical challenges. A simplification of its interpretation, and probably a new formulation, is needed (Veale & Edwards 2018).
In the draft Platform Work Directive, it is not clear what the right to review entails. It should enable workers to review which parameters the algorithm has used to compute or support a given decision. Article 8 should also establish that workers have the right to be represented or accompanied by a trade union during a human review.
Access to relevant information
Article 12 requires digital labour platforms to make certain information accessible to labour, social protection and other relevant authorities. This is less protective than the AI Act (Article 64(2) and (3)) and the GDPR (Article 58), which grant competent authorities access to evidence, including access to source code and any documentation necessary for the fulfilment of their duties. Furthermore, access to information is particularly relevant for workers and their representatives, beyond the rights to review and to explanation.
Rights to dispute resolution and to redress
Article 13 does not properly clarify the forms of protection provided in case of infringement. The expression ‘Without prejudice to Articles 79 and 82 of Regulation (EU) 2016/679’ does not make clear whether these forms of protection are alternatives to those provided for by the GDPR or cumulative with them and, if so, what they consist of. Moreover, the meaning of the expression ‘effective and impartial dispute resolution’ is unclear. Finally, it is not clear why Article 77 GDPR is not included in the list of potential means of redress.
Supervision of competent authorities
The success of the European Commission’s proposal will depend on effective enforcement: how, and before which authority, workers can exercise their rights. As the draft Platform Work Directive is currently designed, the allocation of competences looks artificial and far from obvious in practice. The draft assigns competences to both DPAs and labour authorities. Even though Article 19 states that cooperation between them is essential, we would draw attention to the complex allocation of competences and the risk of cumulative and overlapping competences. For instance, the obligations established by Articles 6, 7(1) and (3), 8 and 10 fall under the competence of national DPAs, but we believe they should fall under that of labour authorities.
Moreover, strategic enforcement cooperation between DPAs and labour authorities should be ensured throughout investigations. The current capacities of both authorities are already overstretched. We doubt that DPAs will have the ability (or, in some cases, the willingness) to carry out in-depth examinations of labour issues, and vice versa, as far as oversight and redress are concerned. The EC should therefore further clarify the allocation of competences, including cross-border considerations.
Article 19 assigns to the DPAs identified by the GDPR the task of monitoring compliance with Articles 6, 7(1), 7(3), 8 and 10 of the draft Platform Work Directive, as provided for by, among others, Chapter VII of the GDPR on cooperation and consistency. This mechanism, however, has shown its weaknesses, related mainly to the slowness of certain DPAs (acting as lead supervisory authorities) in deciding important cross-border data processing cases. There are many reasons for this, but one of the main ones is the absence of a uniform European procedural framework requiring such authorities to decide cases within strict deadlines.
In the absence of any indication in the GDPR, these procedural aspects are regulated by national rules, which often set no time limit for resolving proceedings of a transnational character. This has led to an impasse which, as already noted by EDRi in a letter to the EDPB (2022), would require, among other things, regulatory changes to overcome. Consequently, the reference to the GDPR in Article 19 of the draft Platform Work Directive risks entrusting the fate of workers to a very slow procedure with very little chance of timely protection of fundamental interests. Indeed, the automated decisions regulated by the draft Platform Work Directive have the potential to terminate an employment relationship and thus deprive workers of their fundamental means of subsistence. The absence of procedural rules in the GDPR, combined with ineffective or non-existent national rules, must not be allowed to deprive workers of effective protection. We therefore suggest including in Article 19 a specific provision regulating, at a minimum, the maximum time limit for resolving cross-border disputes.
Final remarks
It has taken almost 100 court cases for the EC to propose a draft Directive to improve platform workers’ working conditions (EESC 2022). In practice, the new rules will make it possible to clarify workers’ employment status and working conditions, including essential aspects related to occupational health and safety and access to social protection.
The draft Platform Work Directive is also the first legal text to regulate algorithmic management, which is driving radical change in work organisation and work relations, nudging workers’ behaviour and affecting many dimensions of their lives, at work and beyond. Here too, the benefits can be significant, but to guarantee genuine algorithmic accountability the legislator should not simply import into the Platform Work Directive some of the rights established by the GDPR. As argued in this Policy Brief, all GDPR rights should be part of the Platform Work Directive, adjusted to the employment context, and enforcement should be a priority. In its current form, the legal proposal leaves too much discretion to the courts to decide on important aspects. Social partners should see this as an opportunity to negotiate agreements on technological innovation and on the lawful processing of data.
If we look at the legislative architecture that the EC is building to regulate AI in the employment context, we see a worrying sequence of missed opportunities. The AI Act, because of its internal logic, fails to provide adequate protection to workers exposed to and using AI systems. The Platform Work Directive is another chance to achieve this, but the current draft falls short of the mark, for the reasons presented in this Policy Brief. There is still space and time to improve the text, using the GDPR as described above. If this opportunity is not taken, however, the problem will remain, grow in magnitude and perhaps necessitate an ad hoc piece of legislation, a suggestion already made in a previous ETUI Policy Brief.
The authors would like to thank Sarah Chander (EDRi), Nicola Countouris (ETUI) and Stefano Rossetti (noyb) for their comments and contributions to this Policy Brief.