AI policy in companies: how to prepare your organisation for the EU AI Act
Artificial intelligence (AI) is evolving at high speed and has become embedded in day-to-day business operations. From generative AI tools and chatbots to systems used for recruitment, customer analytics and decision-making, more and more organisations rely on AI systems, often without full clarity on the legal and organisational obligations that come with them.
With the EU AI Act, that is changing fundamentally. Organisations are expected to make informed choices about the use of AI and to actively manage associated risks. This article explains how to develop a practical, workable and legally robust AI policy, so your organisation is prepared for the EU AI Act and continues to comply with existing rules such as the General Data Protection Regulation (GDPR).
What is an AI policy and why is it necessary?
An AI policy is a set of internal rules that defines how, why and under what conditions AI may be used within the organisation. It provides guidance for employees and helps management maintain oversight and control as technology develops.
AI is no longer limited to IT departments or large technology companies. Many AI features are built into existing software, such as CRM systems, HR tools and marketing platforms. Employees also frequently experiment with public AI tools on their own initiative. Without clear guardrails, this can lead to privacy violations, discrimination, lack of transparency or flawed decision-making.
The EU AI Act and the GDPR impose clear responsibilities on organisations. These include obligations relating to risk management, data use, human oversight and transparency. A well-designed AI policy helps translate those requirements into everyday practice.
The legal framework: EU AI Act, GDPR and employment law
The EU AI Act follows a risk-based approach, classifying AI systems into four categories: minimal risk, limited risk, high risk and unacceptable risk. For high-risk AI uses, such as systems for recruitment and selection, credit scoring or other decisions with significant effects on individuals, strict requirements apply.
These requirements cover, among other things, risk management and documentation, the quality and provenance of data, transparency regarding how the system works and what its limitations are, and effective human oversight with the ability to intervene. Certain AI practices are prohibited outright, including specific forms of manipulative AI and social scoring.
In addition, the GDPR remains fully applicable. Where AI systems process personal data, core principles such as data minimisation, lawfulness, security and restrictions on automated decision-making are particularly relevant. Employment law and consumer protection rules may also apply, for example in HR-related AI or customer-facing AI applications. An AI policy connects these legal requirements to daily business operations.
Purpose and scope of a strong AI policy
An effective AI policy is not a theoretical document; it is a practical compass for anyone working with AI. It should explain why the organisation uses AI, what it aims to achieve, what risks arise, and how employees are expected to use AI tools responsibly.
Defining the scope is crucial. The policy should specify which departments it applies to, such as HR, marketing, customer service, finance, operations and research and development. It should also clarify which types of systems fall under the policy, including purchased AI software, internal models, generative AI tools, chatbots, scoring tools and recommendation systems. Finally, it should address whether and under what conditions individual experimentation with public AI tools is allowed.
Key components of an AI policy
An AI policy should start with clear definitions, aligned with the broad concept of AI under the EU AI Act, but written in language employees can understand. Staff should be able to recognise when they are using an AI system that falls within the policy. Practical examples by domain, such as HR, customer interaction and internal processes, help bring this to life.
The policy should then distinguish between permitted AI use, restricted use subject to conditions and prohibited use. Prohibited use includes AI practices classified as unacceptable risk under the EU AI Act. For limited-risk use cases, the policy may impose conditions such as transparency obligations or prior approval. Permitted use can be linked to safeguards such as a risk assessment, a Data Protection Impact Assessment (DPIA) and additional technical and organisational measures.
Governance is another core element. The policy should clearly define who is ultimately responsible for AI compliance, who is authorised to select or implement new AI applications and who oversees adherence and incident handling. Vendor selection and supplier management also matter. Organisations should assess whether providers can meet the requirements of the EU AI Act and ensure those obligations are properly reflected contractually.
Data, privacy, security and transparency
Because AI depends on data, the policy should define what data may or may not be processed through AI systems. It should address data minimisation, anonymisation or pseudonymisation where appropriate, retention periods and separating training data from production data. For high-risk systems, a combined assessment is often needed that considers both the EU AI Act and the GDPR.
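Where the policy calls for pseudonymisation before data reaches an AI system, one common technique is a keyed hash that replaces direct identifiers with stable pseudonyms. The sketch below is purely illustrative: the record fields and the key are hypothetical, and real key management (a secrets manager, rotation, access controls) is assumed rather than shown.

```python
import hashlib
import hmac

def pseudonymise(value: str, secret_key: bytes) -> str:
    # Keyed hash (HMAC-SHA256): the same input and key always yield the
    # same pseudonym, so records remain linkable, but the mapping cannot
    # be reversed or reproduced without the key. Store the key separately
    # from the pseudonymised data.
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record about to be shared with an external AI tool
key = b"example-key"  # illustrative only; never hard-code a real key
record = {"employee_id": "E-1042", "note": "Met Q3 targets"}
safe_record = {**record, "employee_id": pseudonymise(record["employee_id"], key)}
```

Note that under the GDPR, pseudonymised data remains personal data: this reduces exposure but is not anonymisation, so the policy's other safeguards still apply.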
AI systems and the data they rely on must be properly secured. The policy should describe how access rights are organised, how use is logged and monitored, and how incidents and data breaches are handled.
The EU AI Act requires transparency when individuals interact with AI systems or when content is generated by AI. The policy may therefore require that employees, customers and other stakeholders are informed whenever AI is used, including key characteristics and limitations.
Human oversight, bias and decision quality
For AI systems with significant impact on individuals, human oversight is essential. The policy should specify when human oversight or human decision-making is mandatory and how this oversight is implemented in practice. It is also advisable to periodically test AI systems for bias, error rates and unintended consequences, especially in areas such as HR and customer onboarding.
Training and AI literacy
The EU AI Act requires organisations to ensure a sufficient level of AI literacy among the staff who work with AI systems. An AI policy should therefore include a training framework with a baseline level for all employees and more advanced training for specific roles such as HR, IT, data teams and management. Regular updates are necessary to keep pace with technological and legal developments.
From initial inventory to a mature AI policy
A workable AI policy is typically developed in phases. First, the organisation identifies which AI applications are in use, including AI features embedded in existing software and tools adopted by employees. Next, these applications are classified by risk. A legal and organisational risk assessment follows, after which the AI policy is drafted and aligned with existing privacy, information security and HR frameworks. The policy is then implemented in processes, contracts and systems. Finally, training, communication, monitoring and periodic updates ensure the policy remains effective over time.
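The inventory-and-classification step above can be supported by a simple structured register. The sketch below is one possible shape, not a legal requirement; the system names, vendors and risk assignments are entirely hypothetical, and real classification must follow the legal criteria of the EU AI Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    # The four risk categories of the EU AI Act
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

@dataclass
class AISystem:
    name: str
    vendor: str
    purpose: str
    processes_personal_data: bool
    risk_level: RiskLevel

# Hypothetical register entries for illustration only
inventory = [
    AISystem("CV screening tool", "ExampleHR", "recruitment shortlisting", True, RiskLevel.HIGH),
    AISystem("Website chatbot", "ExampleBot", "customer FAQ", True, RiskLevel.LIMITED),
    AISystem("Spam filter", "ExampleMail", "inbox filtering", False, RiskLevel.MINIMAL),
]

# High-risk systems that process personal data typically need a combined
# EU AI Act assessment and GDPR DPIA before deployment.
needs_assessment = [
    s for s in inventory
    if s.risk_level is RiskLevel.HIGH and s.processes_personal_data
]
```

Even a lightweight register like this makes the later phases (risk assessment, contracting, monitoring and periodic review) far easier to organise and to evidence towards regulators.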
Conclusion
The EU AI Act makes it clear that ad hoc or unstructured experimentation with AI is no longer sustainable. Organisations that invest early in a well-designed AI policy reduce legal risk and build trust with employees, customers and regulators.
Would you like to know whether your organisation is ready for the EU AI Act, or do you need support drafting or implementing an AI policy? Contact Law & More. We are happy to assist.
FAQ
Is an AI policy mandatory under the EU AI Act?
The EU AI Act does not explicitly require organisations to have a document titled “AI policy”. In practice, however, an AI policy is essential to demonstrate compliance with the obligations imposed by the AI Act and the GDPR, such as risk management, human oversight, transparency and AI literacy.
Which organisations are subject to the EU AI Act?
The EU AI Act applies to virtually all organisations that develop, place on the market or use AI systems within the European Union. This includes not only technology companies, but also employers, service providers and organisations that use AI in HR, marketing, customer interaction, finance or decision-making processes.
Does the EU AI Act apply if we only use standard off-the-shelf software?
Yes. Even where AI functionalities are embedded in third-party software, the organisation using the system remains responsible for its use. Relying on a vendor does not remove the user’s obligations under the EU AI Act and the GDPR.
What is the difference between minimal-, limited- and high-risk AI systems?
The EU AI Act classifies AI systems based on the level of risk they pose to fundamental rights and interests of individuals. Minimal-risk systems, such as spam filters, face no new obligations beyond existing law. Limited-risk systems, such as chatbots, mainly trigger transparency obligations: people must be informed that they are interacting with AI. High-risk AI includes systems used for recruitment and selection, employee evaluation, creditworthiness assessments or access to essential services; these are subject to significantly stricter requirements covering risk management, data quality, documentation and human oversight.
Do all AI applications need to be assessed in advance?
In practice, yes. Organisations should inventory and assess AI applications before deployment and classify them according to risk. For high-risk AI, a thorough assessment is required, often combined with a Data Protection Impact Assessment under the GDPR.
How does an AI policy relate to the GDPR?
The EU AI Act and the GDPR complement each other. While the AI Act focuses on governance, risk management and the functioning of AI systems, the GDPR regulates the processing of personal data. An effective AI policy integrates both frameworks and ensures consistent compliance.
Is a Data Protection Impact Assessment always required when using AI?
Not always, but frequently. If an AI system processes personal data and is likely to result in a high risk to individuals, a DPIA is mandatory under the GDPR. In the case of high-risk AI under the EU AI Act, a DPIA is often unavoidable in practice.
May AI systems make autonomous decisions about employees or customers?
Only under strict conditions. The GDPR restricts fully automated decision-making, and the EU AI Act requires meaningful human oversight for high-risk AI systems. In many cases, a human must be able to intervene, review or override AI-driven decisions.
Can an AI policy restrict employees’ use of public AI tools?
Yes. One of the key purposes of an AI policy is to define whether and under what conditions employees may use public AI tools. This typically includes rules on entering confidential information, personal data or sensitive business information.
Who is responsible for compliance with the AI policy?
The AI policy should clearly allocate responsibility for AI compliance. Ultimate responsibility usually lies with senior management or the board, with important roles for legal, compliance, IT and HR. Without clear governance, effective oversight is unlikely.
What are the risks if an organisation does not have an AI policy?
The absence of an AI policy increases the risk of non-compliance with the EU AI Act and the GDPR. This may result in substantial fines, enforcement measures, reputational damage and potential civil liability. It also makes it more difficult to demonstrate responsible AI governance to regulators.
How often should an AI policy be reviewed?
An AI policy should not be treated as a static document. Regular reviews are necessary, particularly when new AI systems are introduced, legislation or regulatory guidance changes, or incidents occur. Annual review is often considered a minimum.
Is AI literacy required for all employees?
The EU AI Act requires organisations to take measures to ensure a sufficient level of AI literacy among their staff. This does not mean every employee must become a technical expert, but they should understand what AI is, how it is used within the organisation and what risks are involved.
When is it advisable to seek legal advice?
Legal advice is particularly advisable when deploying high-risk AI systems, when there is uncertainty about the lawfulness of specific applications, or when questions arise regarding enforcement, audits or liability. Early legal review can prevent costly corrective action later.
