
Deployment of AI in the Workplace in France: The Importance of Consulting With the Workforce


In a significant ruling on 14 February 2025, the First Instance Court of Nanterre, France, ordered a company to suspend the deployment of several artificial intelligence tools until proper consultation with its Works Council was completed.

The company had started implementing new AI applications while the mandatory Works Council consultation process was still ongoing. Despite the company’s claim that these tools were merely in a “pilot phase,” the court found that their deployment to employees constituted actual implementation rather than simple experimentation.

The court’s decision emphasizes the importance of respecting employee representation rights in the digital transformation of workplaces, especially in France. In granting injunctive relief, the court ruled that the premature implementation of the AI tools constituted a “manifestly unlawful disturbance” of the Works Council’s prerogatives.

This case sets an important precedent for companies implementing AI technologies in France, highlighting the necessity of proper employee consultation procedures before deploying new technological tools in the workplace. These consultation requirements apply in addition to the obligations under the recently adopted EU AI Act.

The EU AI Act classifies AI systems into four categories: prohibited AI systems, high-risk AI systems (HRAIS), general purpose AI models (GPAIM), and low-risk AI systems.

Because obligations are based on an AI system’s risk level, the most stringent rules apply to providers of HRAIS, which must in particular:

  • Implement comprehensive risk management systems;
  • Ensure data governance;
  • Maintain technical documentation;
  • Guarantee transparency;
  • Enable human oversight;
  • Meet standards for accuracy, robustness, and cybersecurity;
  • Conduct conformity assessments; and
  • Cooperate with regulators.

Providers of general purpose AI models (GPAIM) must fulfill obligations such as issuing technical documentation, complying with EU copyright rules, and providing summaries of their training data. GPAIMs posing systemic risks are subject to additional requirements, including model evaluations, risk mitigation, and incident reporting.

Josefine Beil also contributed to this article.


