What are the health & safety risks of AI?


The introduction of AI in the workplace brings innovative developments but also challenges and risks for workers’ safety, health and wellbeing, finds a report commissioned by the European Agency for Safety and Health at Work.

In 2020 the European Agency for Safety and Health at Work (EU-OSHA) initiated a four-year research programme on digitalisation and occupational safety and health (OSH) with the aim of supporting evidence-based policymaking by providing deeper insights into the consequences of digitalisation on workers’ health, safety and wellbeing and how these are addressed at the research, policy and practice levels, as well as by describing examples of successful practices.

The report presents findings from EU-OSHA’s project on new forms of worker management through AI-based systems (AI-based worker management, AIWM) and OSH.

The aim was to identify gaps, needs and priorities for OSH and to make recommendations for policy, research and practice in support of decision-making; these recommendations were discussed at a high-level workshop that concluded the project.

Based on the findings of the research, there are a number of recommendations that can be used to mitigate risks to workers’ safety, health and wellbeing that are associated with the design and use of AIWM systems.

  • Making the design, development and use of AIWM systems human-centred, so that they support workers and leave humans in control. This would also guarantee that the compassion, empathy and care for workers brought by humans are not replaced by computer decision-making that solely seeks to increase business profits.
  • Ensuring workers’ participation, consultation and social dialogue. Workers should be included in the design, development and testing phases, in ex ante and ex post assessments, and in the use of AI-based systems. Including workers at all stages of AI development and use will help make such systems trustworthy and human-centred and keep them under human control. This can also be achieved by enforcing co-governance of AIWM systems, giving workers a say in how AIWM is developed, acquired, introduced and used. This is key to preventing the possible OSH risks of AIWM.
  • Fostering a holistic approach to evaluating AIWM systems. This means including different stakeholders in the evaluation process and ensuring that such systems are not evaluated in a vacuum; it also covers the effects AIWM might have on workers and society as a whole. Evaluation should be a dynamic process rather than a one-off exercise, as AI-based systems can evolve through self-learning, meaning a system that was safe in the past may become dangerous for workers.
  • Improving the design, development and use of AI-based systems by making the functioning and purpose of AIWM transparent, explainable and understandable. This might be ensured by introducing more binding requirements for AIWM providers and developers to ensure that workers’ health, safety and wellbeing are already considered from the design stage. This should also go hand-in-hand with a strong enforcement policy ensuring that organisations comply with regulations.
  • Establishing a clear line of responsibility indicating who is responsible for ensuring that an AIWM system does not cause harm to workers, break the law or malfunction. This includes establishing oversight mechanisms, remedies on how the negative effect of AIWM can be mitigated, and a course of action on what to do if managers fail to govern the AIWM system. Ensuring the line of responsibility could also go beyond simply stating that an employer in general is responsible for AIWM systems by instead requiring organisations to specifically name responsible managers.
  • Improving workers’ privacy and data protection by increasing transparency about data collection and usage and introducing better reporting mechanisms on misuses of AIWM tools. More specifically, workers should have the right to edit or block algorithmic inferences and to contest automated decisions, and they should be guaranteed full freedom to refuse consent to the collection of their data, backed by additional provisions prohibiting lay-offs or any other negative actions against workers in such cases. This can be expanded upon by ensuring workers the right to an explanation of decisions made by algorithms, including what personal data the algorithm used, how these data were collected and how the decision was reached.
  • Ensuring the right to disconnect for workers. In addition to its primary goal of guaranteeing workers the right to disconnect from work during non-working hours, it could also serve as a means to ensure workers’ privacy and personal data protection, in particular when it relates to a disproportionate amount of monitoring and surveillance not strictly necessary for a legitimate purpose.
  • Promoting knowledge exchange, dissemination and awareness raising on AIWM and how it might affect OSH. This might include creating a dialogue involving relevant stakeholders, such as representatives of workers, employers, OSH authorities, experts and AIWM tool developers. The dialogue should be open, allow all sides to express their opinions, and focus not only on what should be controlled, banned and mitigated, but also on how to use AI-based tools ethically.
  • Enhancing labour inspectorates’ capacities and their cooperation with national data protection authorities, which can also improve worker privacy and data protection. This includes improving labour inspectorates’ knowledge of AIWM and how it might affect OSH, as well as providing labour inspectors with tools for closer cooperation with data protection officers on questions relating to how AIWM and similar AI-based systems affect OSH.
  • Increasing education efforts that enhance workers’ and employers’ AI literacy by promoting qualification and skills development for AIWM applications. This would empower them to better understand AIWM systems and thereby exercise their right to consultation and participation in the design and implementation of such systems. Education and awareness-raising efforts should focus on ensuring that current and future AIWM systems put humans and their health, safety and wellbeing at the centre.
  • Ensuring transparency between developers of AIWM systems and deploying organisations. This includes, but is not limited to, sharing with organisations how such a tool operates, how it makes decisions, what kinds of risks and negative effects it can create, and its benefits and drawbacks. However, if full transparency is not possible, any agreement should stipulate that if a system causes harm and the deploying company has no right to demand changes to it, the developer must shut the system down at once.

The usage of AIWM systems is steadily growing across companies and economic sectors, which can be explained by the fact that they allow organisations to improve productivity and efficiency. However, the introduction of such systems in an organisation can also lead to a large array of ethical and privacy issues, as well as to OSH-related risks.

Nevertheless, if AIWM systems are built and implemented in a trustworthy and transparent way based on workers’ information, participation, consultation and trust, and on the principle of minimisation of workers’ data collection and usage, AIWM systems may also provide opportunities to improve OSH in the workplace.

Trustworthy AIWM can be built by using a human-centred and human-in-command approach. Key to this are guaranteeing employers, managers, workers and their representatives equal access to information, and ensuring the consultation and participation of workers and their representatives in decisions on the design, development, implementation and use of AI-based management systems.

This also includes respecting human autonomy, preventing harm, ensuring fairness, and establishing the explicability of AIWM systems. To a large extent, this can be achieved by considering workers and their health, safety and wellbeing from the very first design phase of AIWM systems and in the subsequent programming. This, in turn, will help ensure that, when used, AI does not replace traditional human management practices but supports them.

Human-centric AI can also be further fostered by ensuring worker privacy and ensuring that the collected data are not abused by AIWM system developers or employers. Gaps remain, however, as personal data, such as workers’ emotional wellbeing, can be inferred by AIWM systems from observable data such as workers’ body language, facial expressions and tone of voice.

Worker privacy might be further strengthened by ensuring that workers have a right to an explanation of how the AIWM systems used on them work. This includes explaining what kinds of data the systems collect, how these data are used, and how decisions are made on the basis of them.


Read the full report
