EU AI Act Demands Adequate AI Skills


Under the new EU AI Act, organisations must ensure that employees involved in using or operating AI systems demonstrate sufficient AI proficiency (the concept the Act's English text calls “AI literacy”). This mandate extends to both private-sector companies and public authorities, covering any form of AI deployment. The Act's requirements take effect in stages, giving organisations time to meet the regulation's complex demands.


Broad Scope of the AI Act

The AI Act employs a wide definition of what constitutes an AI system. It covers any machine-based system that operates with varying levels of autonomy, may adapt after deployment, and infers from its inputs how to generate outputs, such as predictions, recommendations, or decisions, that can influence physical or virtual environments.

This expansive scope means that, in time, nearly all organisations using advanced software capable of machine learning or adaptive behaviour could be considered AI operators. They must then comply with the Act’s provisions, including ensuring that relevant personnel have adequate AI proficiency.


AI Proficiency: A Key Obligation

From 2 February 2025, the AI Act obligates each “provider” and “operator” of AI systems to ensure that anyone involved in running or overseeing those systems has a sufficient level of AI proficiency. According to the regulation, “AI proficiency” means the skills, knowledge, and understanding required to:

  • Operate AI systems competently and responsibly,
  • Recognise potential benefits and risks, including harm AI may cause,
  • Consider legal obligations and fundamental rights when making decisions about AI deployment.

This requirement is intentionally flexible. The law specifies that the degree of proficiency must be “adequate” in view of each individual’s role, technical background, and the nature of the AI system in question. Organisations should adopt a risk-based approach, ensuring that higher-risk AI applications—and employees managing them—meet higher proficiency standards.
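By way of illustration only, such a risk-based approach could be expressed as a simple mapping from system risk and employee role to a minimum training requirement. The tiers and hour counts in the sketch below are assumptions; the Act prescribes neither.

```python
# Illustrative only: the AI Act prescribes no specific tiers or hours.
# A simple lookup scaling minimum training to system risk and role.

REQUIRED_TRAINING_HOURS = {
    # (system risk level, employee role): assumed minimum hours
    ("low", "user"): 1,
    ("low", "operator"): 2,
    ("limited", "user"): 2,
    ("limited", "operator"): 4,
    ("high", "user"): 8,
    ("high", "operator"): 16,
}

def required_hours(risk: str, role: str) -> int:
    """Look up the assumed training minimum for a risk/role pair."""
    return REQUIRED_TRAINING_HOURS[(risk, role)]

print(required_hours("high", "operator"))  # -> 16
```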


High-Risk AI and the “Emergency Stop” Principle

Some AI applications are deemed high risk, such as those significantly affecting individual rights or public safety. Under the AI Act, providers and operators of high-risk AI must:

  • Appoint adequately trained personnel who can monitor the system’s functioning,
  • Equip these personnel with the authority to make informed decisions and even deactivate the AI if necessary,
  • Ensure user guides, human oversight mechanisms, and stop controls are in place.

The legislation thus envisions a synergy between human competence and robust technical safeguards, especially where AI poses greater risks. Operators lacking the required competence cannot be assigned to high-risk AI tasks.
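The Act mandates the capability to intervene, not any particular design. As a minimal sketch of what a stop control could look like in software, the following wrapper lets a designated human overseer deactivate a system before it produces further output; all names and behaviour here are assumptions for illustration, not requirements drawn from the Act.

```python
# A minimal sketch of a human-oversight "stop control". Class names and
# the overseer mechanism are assumptions for illustration.

class EmergencyStop(Exception):
    """Raised when a deactivated system is asked for further output."""

class OverseenAISystem:
    def __init__(self, model, overseer: str):
        self.model = model        # any callable that produces outputs
        self.overseer = overseer  # trained person accountable for oversight
        self.active = True

    def deactivate(self, reason: str) -> None:
        # The overseer can halt the system at any time.
        self.active = False
        print(f"{self.overseer} deactivated the system: {reason}")

    def predict(self, inputs):
        if not self.active:
            raise EmergencyStop("System deactivated by its overseer.")
        return self.model(inputs)

# Usage with a stand-in model:
system = OverseenAISystem(model=lambda x: x * 0.8, overseer="J. Doe")
print(system.predict(100))        # normal operation -> 80.0
system.deactivate("unexplained bias observed in outputs")
# Any further call to system.predict() now raises EmergencyStop.
```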


Potential Liabilities and Sanctions

Failure to maintain adequate AI proficiency can have serious consequences:

  1. Regulatory Fines or Sanctions: The AI Act allows for fines or enforcement measures against non-compliance, reaching up to 7% of worldwide annual turnover for the most serious infringements. Individual EU Member States will define precise penalties, which must be effective, proportionate, and dissuasive.
  2. Civil Liability: General legal principles hold that violating statutory duties—such as failing to provide sufficient AI proficiency—may trigger liability. If inadequate training contributes to damage or harm, the organisation (and in some cases its management) could be liable for negligence.
  3. Management Accountability: Company directors must manage affairs with the diligence of a prudent business leader. Ignoring or breaching AI competence requirements could expose them to personal liability claims.

Preparing for Compliance

Because the AI Act’s obligations are relatively broad, there is no one-size-fits-all approach. Each organisation should:

  • Identify all current and planned AI tools,
  • Classify the risk level of each system (low, limited, high),
  • Establish training programmes suited to the system’s risk profile and each team member’s role,
  • Keep clear records of competence assessments and staff training (a simple register sketch follows this list),
  • Ensure that third-party providers or subcontractors using AI systems also meet proficiency standards.
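One lightweight way to support the inventory and record-keeping steps above is an internal register of AI systems and their training records. The sketch below is purely illustrative; its field names and risk tiers are assumptions, not terms defined by the Act.

```python
# Illustrative only: field names and risk tiers are assumptions,
# not terms defined by the AI Act.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class TrainingRecord:
    employee: str
    role: str                # e.g. "operator", "reviewer"
    completed_on: date
    assessment_passed: bool

@dataclass
class AISystemEntry:
    name: str
    risk_level: str          # "low", "limited", or "high"
    provider: str            # internal team or third-party vendor
    training: list = field(default_factory=list)

    def training_gaps(self):
        # Flag staff assigned to the system without a passed assessment.
        return [r.employee for r in self.training if not r.assessment_passed]

inventory = [
    AISystemEntry(
        name="CV screening tool",
        risk_level="high",
        provider="ExampleVendor Ltd",  # hypothetical third party
        training=[
            TrainingRecord("A. Smith", "operator", date(2025, 1, 15), True),
            TrainingRecord("B. Jones", "reviewer", date(2025, 1, 20), False),
        ],
    ),
]

for entry in inventory:
    if gaps := entry.training_gaps():
        print(f"{entry.name} ({entry.risk_level} risk): gaps for {gaps}")
```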

Larger enterprises may create cross-functional teams to oversee AI governance, involving experts in legal, technical, and ethical matters. Smaller businesses might rely on external advisers to fill the gap until internal expertise matures.


Conclusion

The EU AI Act, with its expansive definition of AI systems, mandates that relevant personnel develop and maintain sufficient AI proficiency. This requirement is neither optional nor trivial: poor compliance can lead to stiff penalties, liability claims, and reputational damage. By establishing robust training regimes and ensuring staff can competently manage AI technologies—particularly high-risk ones—organisations demonstrate both legal compliance and a commitment to responsible innovation.
