
Who is liable when artificial intelligence is used in a company?

If you have any questions about this topic, please contact me by phone at 040 524 717 830 or by email at lugowski@smart-arbeitsrecht.de.

The use of artificial intelligence is already a reality in the daily work of many companies. Whether it's text creation, data analysis, customer communication, or internal decision support: AI systems are intended to accelerate processes and reduce costs. At the same time, however, the legal risk is increasing. Errors, legal violations, or damages caused by AI raise a crucial question: Who is actually liable when something goes wrong?

The answer is complex. A separate AI liability regulation does not yet exist. Companies must therefore continue to adhere to the general principles of civil and labor law.


The AI itself is not liable

Artificial intelligence is not a legal entity. It can be neither a bearer of rights nor of obligations. Liability of AI "as such" is therefore excluded. The party liable is always a natural or legal person who uses, provides, or controls the AI. In business practice, this is regularly the employer.

Primary liability lies with the employer

If the use of AI causes damage, for example through faulty content, data protection violations, or breaches of contract, the employer is initially held liable. This follows from the general organizational and entrepreneurial risk. Anyone integrating AI systems into their business must organize, manage, and monitor their use in a legally compliant manner.

Furthermore, employers are also accountable for breaches of duty committed by their employees. Under Section 278 of the German Civil Code (BGB), a debtor is liable for the fault of their vicarious agents. When using AI, employees regularly act within the employer's sphere of responsibility, even if they violate internal guidelines. This attribution of liability ceases only where any connection to the employer's operational activities is entirely absent.

This attribution of liability gains further significance under the AI Regulation. Article 4 of the AI Regulation imposes an AI-literacy obligation on operators of AI systems: employers must ensure that employees working with AI possess sufficient AI competence. Omitted or inadequate training can further weaken the company's liability position.

Employee liability is limited

Even though the employer is primarily liable, the question arises as to possible recourse against the employees. Here, the principles of internal compensation for damages, developed by case law, come into play.

For work-related activities, employees are liable only to a limited extent, if at all. In cases of slight negligence, liability is excluded entirely; in cases of ordinary (medium) negligence, the damage is apportioned between employer and employee; in cases of gross negligence or intent, the employee is generally fully liable. This limitation of liability rests on an equitable application of Section 254 of the German Civil Code (BGB) and reflects the fact that the employer structurally determines the working conditions and the associated risks.

In addition, Section 619a of the German Civil Code (BGB) applies to employment relationships. Unlike in general contract law, the employee's fault is not presumed. The employer bears the burden of proof and must demonstrate that the employee is responsible for a breach of duty. In practice, therefore, claims for recourse are often difficult to enforce.

Breaches of duty in the handling of AI

Typical breaches of duty in connection with AI can arise both in operating the systems and in monitoring them. While prompting is considered the actual act of use, employees regularly also have a duty to review and supervise the results. This duty, however, extends only to obvious errors; employees cannot generally be expected to guarantee the accuracy of AI output in full.

The decisive factor is the individual performance capacity of each employee. The Federal Labor Court has clarified that the employee must perform to the extent that they are personally capable. An objective ideal is not the determining factor.

Manufacturer liability and new product liability

The liability of AI providers is becoming increasingly important. For a long time, it was disputed whether software should even be considered a product under product liability law. This uncertainty is significantly reduced by the EU's new Product Liability Directive. In the future, software will explicitly fall under the definition of a product. Manufacturers will then be liable not only for initial defects but also for damages resulting from omitted or faulty updates.

This becomes particularly relevant for companies that develop or operate their own AI systems, such as internal chatbots or automated decision-making systems. In such situations, they may themselves assume the role of the liable manufacturer.

Failed AI Liability Directive and open legal situation

The originally planned AI Liability Directive, which was intended to regulate liability issues in cases of violations of the AI Regulation, was withdrawn by the EU Commission. Among other things, it envisaged easing the burden of proof for injured parties and a presumption of causation in certain cases of breach of duty. These regulations have not yet entered into force.

For the time being, the existing principles of civil and labor law remain in effect. For injured parties, this means high hurdles in proving fault and establishing causation. For companies, however, this is no reason for complacency: future legislative amendments remain possible, and with them continued legal uncertainty.

Liability of management and board of directors

Liability risks don't just affect the company itself. The governing bodies of legal entities can also be held personally liable. Managing directors of a GmbH (limited liability company) and board members of an AG (stock corporation) are obligated to ensure proper organization and compliance with the law. This obligation also includes the legally compliant use of AI systems, including training and compliance structures.

If organizational deficiencies lead to damages, the company risks internal liability. While the company's governing body may, under certain circumstances, invoke the business judgment rule, this rule does not apply to violations of law, but only to discretionary business decisions within legal limits.

Practical tip: Actively manage liability risks

The use of AI in business remains challenging from a liability perspective. A clear legal framework for liability is still lacking. Companies should therefore create their own structures to minimize risks. This includes, in particular, internal AI guidelines, clear responsibilities, employee training, and ongoing review of the systems used.

Anyone using AI should master it not only technically but also legally, because ultimately it is not the AI that is liable, but the person who uses or is responsible for it.

FAQs – Frequently Asked Questions: Liability for Artificial Intelligence in the Employment Relationship

Who is liable if artificial intelligence causes damage in a company?
Can companies exempt themselves from AI liability through a disclaimer?
Is the employer also liable for errors made by employees when using AI?
When are employees liable for damages caused by AI?
Who bears the burden of proof in cases of employee breaches of duty?
What role does the AI Regulation play in liability?
Are AI manufacturers liable for damages caused by faulty systems?
What happens if employees misuse AI or accept results without checking them?
Can managing directors or board members be held personally liable?
How can companies reduce liability risks when using AI?