How to Write an AI Policy for Your Organization
DEN has developed a practical guideline for AI policy specifically for the cultural sector. This step-by-step guide is a tool that helps both small and large cultural organizations systematically create a future-proof AI policy. After all, every organization is different and can make its own choices within the framework of the law. The guideline consolidates relevant regulations, ethical considerations, concrete examples, and reflection questions. This allows every cultural organization to determine what it wants to do with AI, why, and under what conditions.
What Should Be Included in an AI Policy?
A good AI policy begins with the organization's vision and core values. AI applications are truly valuable only when they contribute to that strategy. Follow these steps to write your policy:
Step 1
Read the guidance document for basic information to determine your policy. It covers the following themes:
- Reliability
- Copyright and portrait rights
- Sustainability and the environment
- Privacy and data
- Transparency
- Diversity and inclusion
- Responsibility
- Impact on work
Use the guidance document and discuss these topics with your colleagues to determine your stance on them.
Step 2
With a representative group from your organization, fill in the working document for each topic. Draw inspiration from the three examples (focused on the cultural sector) provided for each topic, review the AI policy checklist in the document, complete your policy, and use the reflection questions to make sure it is comprehensive.
The guidance document then describes how you can introduce and implement this AI policy within your organization. Good luck drafting your AI policy!
Why is drafting an AI policy so important? Read below.
Why Is AI Policy Especially Important Now?
Technological developments are moving at lightning speed. With the launch of ChatGPT (based on GPT-3.5) in late 2022, generative AI became widely available. This development quickly had a significant impact on how we create and consume content. Suddenly, we had access to a tool that could answer virtually any question, whether correctly or not. We started seeking answers with AI tools instead of search engines, drafting texts with AI, and generating our own images.
Organizations encountered unexpected issues: from ethical dilemmas to data breaches. For example, in 2023, Samsung banned the use of generative AI by employees after sensitive company information was leaked via ChatGPT.
Additionally, new AI-related legislation is emerging. The European AI Regulation, known as the AI Act, came into effect in August 2024. This law will impose new obligations in phases over the coming years. Among other things, it requires organizations to handle certain AI systems transparently and to invest in AI literacy for employees.
Society is also calling more strongly for guidelines. People are concerned about the implications of AI for their privacy, the spread of disinformation, the reinforcement of existing biases, and job displacement. Cultural organizations bear a societal responsibility: the public expects museums, venues, archives, and libraries to handle new technology carefully. With a proactive AI policy, you show as a cultural organization that you are prepared, mitigate risks, and seize opportunities in a way that aligns with your cultural goals.
Not yet familiar with the ethical issues and risks of AI? Read the basic article on AI and ethics here to learn how to approach them.
Draft Your AI Policy
Get started right away with practical tools. Use a step-by-step plan to draft an AI policy that aligns with your organization's vision and values.