Photo of an immersive art experience with projection

What does the European AI Act mean for cultural organizations?

The European AI Regulation (AI Act) is groundbreaking legislation aimed at regulating the use of artificial intelligence (AI) within the European Union. This law ensures the safety and fundamental rights of European citizens. 

In this article, we delve into what the European AI Act entails and what this law means for cultural organizations. Additionally, we provide you with some tips and tools to prepare your organization for the AI Act.

7 min. read · 3 Feb 2025

What does the European AI Act entail?

The details and exact implications of the AI Act will become clearer as the law is implemented. The AI Act entered into force on August 1, 2024, and its first obligations have applied since February 2, 2025. Here we focus on the main points of what is currently known.

The core of the European AI Act is that it classifies AI systems into different risk levels.

  1. Unacceptable risk:

    AI systems that pose a clear threat to the safety, rights, and freedoms of people. These systems are completely prohibited. Examples include AI systems that use subliminal techniques to manipulate behavior or systems for social scoring.

  2. High risk:

    AI systems that pose a significant risk to safety or fundamental rights. These systems are subject to strict requirements, such as extensive documentation, transparency, and oversight. Examples include AI systems used in critical infrastructures, such as healthcare and law enforcement.

  3. Limited risk:

    AI systems that pose a low risk. These systems are subject to minimal requirements and, for example, lighter transparency obligations. Examples include chatbots and emotion recognition systems. Developers of these systems are required to ensure that users are aware they are interacting with AI.

  4. Minimal or no risk:

    AI systems that pose no significant risks to safety or fundamental rights and are therefore not regulated. These applications are found in everyday uses, such as spam filters, games, and website recommendations.

General Purpose AI (GPAI)

In addition to the above risk levels, the AI Act addresses AI systems developed for ‘general purposes’ rather than for one specific task or goal. This includes well-known and widely used systems such as language models like ChatGPT and image generators like DALL-E and Midjourney. The AI Act imposes specific requirements on these AI systems: they carry their own transparency obligations and must comply with copyright law.

The AI Act is being implemented gradually. Since February 2, 2025, AI systems in the highest risk level have been banned. From August 2, 2025, the rules for GPAI models apply. From August 2026, oversight of high-risk AI systems begins. By August 2027, the AI Act will be fully in force.

The responsibility for complying with the rules primarily lies with the developers of AI systems. However, users are also obliged to take measures to safeguard safety and fundamental rights; these obligations differ by risk level.

Digital render of a cable (digital artwork)

The risk level of AI systems in cultural organizations

Cultural organizations are increasingly using AI to improve their offerings, enhance visitor experiences, and work more efficiently. Think of AI-driven tours, personalized experiences, and improving grant applications using ChatGPT. Most of these applications fall into the limited-risk category or the minimal/no-risk category, which means they face few or no requirements.

Actions for limited-risk AI systems

For AI systems classified as limited risk, cultural organizations can take the following actions:

  • Inform users: Ensure visitors know when they are interacting with an AI system. This can be done through clear notifications or information boards.
  • Minimal documentation: An overview of the AI models used within the organization and their purpose.
  • Optional guidelines or code of conduct: Develop internal guidelines for the use of AI, focusing on ethical use and respect for user privacy.
  • Feedback mechanisms: Implement systems that allow users to provide feedback on the AI systems.

By following these steps, cultural organizations can comply with the AI Act. At the same time, they can position themselves as advocates for responsible AI use. Innovative applications of (generative) AI are still possible within the AI Act.

Actions for high-risk AI systems

Some applications may fall under high risk, especially if they process sensitive data or have a significant impact on the rights and freedoms of individuals. For example, an AI system that uses facial recognition to identify visitors may be considered high risk and must meet stricter requirements.

Preparing for the European AI Act

To prepare for the European AI Act, cultural organizations can take the following steps:

Inventory AI systems: Create an overview of all AI systems used within your institution and determine their risk level.

Transparency and documentation: Ensure you are transparent with users about the use of AI systems. This means users must be informed when they are interacting with an AI system and how their data is being used.

Security and privacy: Implement security measures to protect user data. This includes regularly updating systems and conducting risk analyses.

1. Complete the Cyber Checklist

2. Ensure you work in compliance with the GDPR

3. Update your Terms and Conditions if necessary

Training and awareness: AI literacy among employees is essential for responsible and critical use. Organizations must ensure that employees who work with AI systems have the necessary knowledge and skills. Therefore, invest in the AI skills of your staff so they are aware of the implications of using AI systems.

Awareness is crucial for successful implementation.

1. Follow the learning program ‘Starting with AI: the basics for smarter work’

2. Or start with the National AI course

Make agreements on handling AI: Discuss ethical positions within the organization and determine how to address them. Develop guidelines or a code of conduct to set clear boundaries for employees.

1. DEN is developing a discussion tool in 2025 to guide organizations in this process

2. Knowledge Center Data & Society offers various tools for this purpose
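As an illustration of the first preparation step, an AI inventory can start as a simple structured list that records each system, its purpose, and its risk level under the AI Act. This is a minimal sketch: the `AISystem` structure, the label names, and the example systems are illustrative assumptions, not a format prescribed by the AI Act.

```python
from dataclasses import dataclass

# The four risk levels of the AI Act, as labels for the inventory.
RISK_LEVELS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AISystem:
    """One entry in the organization's AI inventory (illustrative)."""
    name: str        # e.g. "ChatGPT"
    purpose: str     # what the organization uses it for
    risk_level: str  # one of RISK_LEVELS

    def __post_init__(self):
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk_level!r}")

# Hypothetical inventory for a cultural organization:
inventory = [
    AISystem("ChatGPT", "drafting grant applications", "minimal"),
    AISystem("Visitor chatbot", "answering questions on the website", "limited"),
    AISystem("Spam filter", "filtering the info mailbox", "minimal"),
]

# Limited-risk systems carry transparency obligations:
# visitors must know they are interacting with AI.
needs_disclosure = [s.name for s in inventory if s.risk_level == "limited"]
print(needs_disclosure)  # ['Visitor chatbot']
```

Even a spreadsheet with the same three columns serves the purpose; the point is that the overview exists and that each system has an assigned risk level.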

Now available: Draft an AI policy for your organization

AI is changing the way we work and create. With DEN's step-by-step plan, you can develop a future-proof AI policy in eight chapters, with clear steps and inspiring examples. This way, you comply with the European AI Act and ensure clarity within your organization.