In this article, we explain the core elements of the EU AI Act, which classifies AI systems according to risk levels and has far-reaching implications for the development and use of AI in companies.
We highlight the specific challenges for German SMEs, explain the significance of Article 4 for the qualification of employees and show how companies can effectively prepare for the new regulations.
“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
A distinction can be made between three types of AI systems:
You can find more information on AI systems in our article on how to implement AI in your organization.
The EU AI Act is the world's first comprehensive legal framework for regulating artificial intelligence. It categorises AI systems into four levels of risk: unacceptable, high, limited and minimal or no risk. The law bans applications that pose an unacceptable risk, such as social scoring by governments, and imposes strict obligations on high-risk systems, including requirements for risk assessments, transparency and human oversight. The law also introduces measures to ensure the trustworthy use of general-purpose AI models and emphasises future-proof legislation that can adapt to technological change.
Enforcement and oversight will be managed by the European AI Office, which will ensure that AI development is in line with ethical standards and fundamental rights.
Companies must therefore ensure that they fulfil a number of requirements by 2 August 2026, including the mandate in Article 4, which requires them to improve the AI skills of their employees and relevant stakeholders.
The exact impact of the EU AI Act on German SMEs varies from company to company, but two areas are relevant to virtually all of them: compliance with the risk categorisation and investment in AI knowledge, since every company that uses AI systems is affected.
For SMEs, particularly in industries such as manufacturing (e.g. engineering) and healthcare, where AI applications can be critical, compliance with the EU AI Act means implementing processes to continuously assess and document the risk levels of their AI systems. This involves not only significant investment in compliance mechanisms, but also the potential need to modify existing AI systems to meet safety and transparency standards. Measures include creating technical documentation, ensuring the transparency and explainability of the AI system, and ensuring human oversight.
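To make the ongoing assess-and-document process concrete, the measures just listed can be tracked per AI system. The following Python sketch is purely illustrative: the class and field names are our own assumptions, not terminology prescribed by the EU AI Act.

```python
from dataclasses import dataclass

# Hypothetical sketch: a minimal per-system record for the compliance
# measures named above. Field names are illustrative, not prescribed
# by the EU AI Act.
@dataclass
class AISystemComplianceRecord:
    name: str
    risk_category: str                     # e.g. "high", "limited", "minimal"
    technical_documentation: bool = False  # technical documentation created?
    transparency_measures: bool = False    # transparency/explainability ensured?
    human_oversight: bool = False          # human oversight in place?

    def open_obligations(self) -> list[str]:
        """Return the compliance measures that are still outstanding."""
        checks = {
            "technical documentation": self.technical_documentation,
            "transparency and explainability": self.transparency_measures,
            "human oversight": self.human_oversight,
        }
        return [measure for measure, done in checks.items() if not done]


record = AISystemComplianceRecord("CV screening tool", "high",
                                  technical_documentation=True)
print(record.open_obligations())
```

A record like this makes the "continuously assess and document" requirement auditable: at any point, each system's outstanding measures can be listed and assigned to a responsible person.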
Excursus: Examples of risk classifications
The EU AI Act classifies as ‘unacceptable risk’ those AI systems that manipulate behaviour without a person's awareness, exploit the vulnerabilities of certain groups, perform social scoring, scrape facial images in an untargeted manner, recognise emotions in the workplace, or infer sensitive personal characteristics such as ethnicity or sexual orientation.
High-risk AI systems within the meaning of the EU AI Act include systems used in biometric identification, critical infrastructure management, access to education and assessment, human resources management, access to essential services, law enforcement, and migration and border control, all of which can pose significant risks to health, safety or fundamental rights.
AI systems with limited risk within the meaning of the EU AI Act are subject to transparency obligations: users must be informed that they are interacting with AI (for example, chatbots or artificially generated content) so that they can make an informed decision about its use.
AI systems with minimal risk within the meaning of the EU AI Act include applications such as AI-powered video games or spam filters, the use of which is largely free and unregulated in the EU as they are considered low-risk.
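The four tiers above can be summarised as a simple taxonomy mapping example applications to their risk class and the broad regulatory consequence. This is a hedged sketch for orientation only: the example assignments are taken from the classifications above, and the tier and function names are our own.

```python
from enum import Enum

# Sketch of the Act's four risk tiers; names are illustrative.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"   # prohibited outright
    HIGH = "high risk"                   # strict obligations apply
    LIMITED = "limited risk"             # transparency obligations
    MINIMAL = "minimal risk"             # largely unregulated

# Example applications drawn from the excursus above.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "emotion recognition in the workplace": RiskTier.UNACCEPTABLE,
    "biometric identification": RiskTier.HIGH,
    "CV screening in recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Map a risk tier to the broad consequence described above."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "risk assessment, documentation, human oversight",
        RiskTier.LIMITED: "inform users they are interacting with AI",
        RiskTier.MINIMAL: "no specific obligations",
    }[tier]


print(obligations(EXAMPLES["customer-service chatbot"]))
```

In practice, the binding classification of a concrete system always follows from the Act itself (and, for high-risk systems, its annexes), not from a lookup table like this one.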
For SMEs, the AI skills required by Article 4 of the EU AI Act mean implementing training programmes that are tailored to the specific operational needs and existing technical skills of the workforce. The aim is to enable employees to use AI technologies effectively and safely. This requires continuous investment in educational resources and possibly an adjustment of recruitment strategies to prioritise AI skills.
“AI literacy: Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
According to Article 4 of the EU AI Act, companies must ensure that their employees and stakeholders have sufficient AI skills appropriate to their tasks and the specific AI systems they use.
This means that companies must invest in targeted training programmes that provide their employees with the technical knowledge and skills they need to use AI technologies safely and effectively. The implications are far-reaching: organisations will need to allocate resources to training and ensure compliance, which could lead to significant operational changes, particularly in industries that rely heavily on AI.
Many companies have neither the expertise nor the resources to build up AI knowledge company-wide. As a first step, Triebwerk offers a comprehensive and holistic qualification programme for companies that want to provide their employees with the necessary AI skills:
We start with an analysis of the AI application areas in your company: we record which AI tools are already in use and in what form (e.g. an AI chatbot used for internal processes), and we identify the relevant stakeholders and their competence levels. Based on this, we develop a specific training plan that is orientated towards the risk classes of the AI systems used.
In the first few weeks, the training focuses on the basics of AI, including machine learning and deep learning, as well as the legal and ethical framework of the EU AI Act. The correct interpretation and risk management of AI systems are also covered.
After the basics, participants deepen their knowledge in specific application areas. This part includes practical exercises and specific scenarios from the respective area of specialisation in order to make what has been learned directly applicable.
The programme concludes with an examination of the knowledge acquired. After passing the final exam, participants receive a certificate attesting to their qualification. Regular refresher courses keep knowledge up to date.
This structured approach enables effective preparation for the requirements of the EU AI Act and supports the responsible use of AI technologies in the corporate context.