The EU AI Act is a bold, forward-looking initiative that regulates the development and deployment of AI within a structured, risk-based framework. It seeks to ensure that AI systems, especially those deemed high-risk, are safe, transparent, and respectful of fundamental rights, while also fostering innovation and public trust. As the legislative process unfolds, both developers and users of AI technologies will need to keep the evolving regulatory requirements in mind. For us, the most important requirement is clearly transparency. That is why we are looking at what compliance might look like for four practical use cases of AI.
1. Analyzing customer data internally to make smarter marketing & sales decisions
AI Use Case: A company uses AI to segment customers, predict behavior, and personalize ads.
Transparency Requirements:
✅ Inform users their data is being processed by AI for marketing decisions.
✅ Explain AI-driven profiling (e.g., if AI segments users for targeted ads).
✅ Provide opt-out options for AI-based decisions that significantly impact users.
✅ Avoid biased AI models that could discriminate against certain customer groups.
Example of Compliance:
✔️ Display a privacy notice: “We use AI to analyze browsing behavior and recommend personalized offers.”
✔️ Provide an explanation for targeted ads: “You received this ad because of your purchase history and preferences.”
✔️ Offer an opt-out setting for AI-based profiling.
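The opt-out requirement above can be sketched in code. This is a minimal illustration, not a real API: the field names (`ai_profiling_opt_out`, `total_purchases`) and the segmentation logic are hypothetical stand-ins for an actual AI model.

```python
# Sketch: gate AI-based profiling behind an explicit opt-out flag.
PRIVACY_NOTICE = (
    "We use AI to analyze browsing behavior and recommend personalized offers."
)

def segment_customer(customer: dict) -> str:
    """Return a marketing segment, respecting the user's AI-profiling opt-out."""
    if customer.get("ai_profiling_opt_out", False):
        # GDPR Art. 22: the user declined automated profiling,
        # so fall back to a neutral, non-AI default segment.
        return "default"
    # Placeholder for the actual AI segmentation model.
    return "high_value" if customer.get("total_purchases", 0) > 10 else "standard"
```

The key design point is that the opt-out check happens *before* any AI processing, so declining users never enter the profiling pipeline at all.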
🔹 Sources:
EU AI Act, Article 50 (Transparency for AI interacting with humans)
GDPR, Article 22 (Rights regarding automated decisions)
2. Offering AI-based forecasting to end-users based on their own data
AI Use Case: A SaaS platform lets users upload data, and AI generates sales forecasts.
Transparency Requirements:
✅ Inform users they are receiving AI-generated forecasts.
✅ Explain how AI processes their data (e.g., “Forecasts are based on past trends and machine learning predictions.”)
✅ Provide accuracy disclaimers (since AI predictions can be wrong).
✅ Allow users to review, modify, or delete input data before AI processing.
Example of Compliance:
✔️ Add a disclaimer on AI predictions: “This is an AI-generated forecast and should be used alongside human analysis.”
✔️ Provide transparency settings so users can view AI logic (e.g., “Why was this forecast generated?”).
✔️ Allow data deletion requests in compliance with GDPR.
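A forecast endpoint could bundle the transparency fields directly into its response, so the disclaimer and explanation always travel with the prediction. This is a hedged sketch: the three-point average is a trivial stand-in for a real ML model, and the field names are assumptions.

```python
from statistics import mean

AI_DISCLAIMER = (
    "This is an AI-generated forecast and should be used alongside human analysis."
)

def forecast_next_period(history: list[float]) -> dict:
    """Return a forecast plus the transparency fields suggested above."""
    prediction = mean(history[-3:])  # trivial stand-in for a real ML model
    return {
        "prediction": prediction,
        "disclaimer": AI_DISCLAIMER,
        "explanation": "Forecast is the average of your last 3 data points.",
        "input_reviewable": True,  # user could review/edit inputs before processing
    }
```

Returning the explanation alongside every prediction (rather than on a separate help page) makes it much easier to answer "Why was this forecast generated?" in the UI.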
🔹 Sources:
EU AI Act, Article 13 (Transparency and provision of information for high-risk AI)
GDPR, Articles 22, 13(2)(f), 14(2)(g) (information about automated decision-making, often called the “right to explanation”)
3. AI-generated transactional documents (e.g., contracts)
AI Use Case: A legal-tech company provides an AI-powered tool that generates contracts.
Transparency Requirements:
✅ Disclose that the contract was AI-generated.
✅ Explain AI’s role in contract generation (e.g., “This contract was created based on AI-trained templates.”).
✅ Warn users about AI limitations and recommend human legal review.
✅ Ensure traceability (e.g., AI-generated vs. human-edited clauses).
Example of Compliance:
✔️ Add a watermark on AI-generated contracts: “This document was generated using AI and may require legal review.”
✔️ Provide a human-readable summary of key contract terms.
✔️ Allow users to modify AI-generated content before finalization.
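The traceability requirement (AI-generated vs. human-edited clauses) can be modeled with a per-clause origin flag that flips whenever a human edits the text. The class and field names below are illustrative, not part of any real contract-generation library.

```python
from dataclasses import dataclass, field

WATERMARK = "This document was generated using AI and may require legal review."

@dataclass
class Clause:
    text: str
    origin: str = "ai"  # "ai" or "human" -- traceability per clause

@dataclass
class Contract:
    clauses: list = field(default_factory=list)

    def edit_clause(self, index: int, new_text: str) -> None:
        """A human edit replaces the clause and flips its origin to 'human'."""
        self.clauses[index] = Clause(new_text, origin="human")

    def render(self) -> str:
        """Render the contract with the AI watermark prepended."""
        body = "\n".join(c.text for c in self.clauses)
        return f"{WATERMARK}\n\n{body}"
```

Keeping the origin flag on each clause, rather than on the document as a whole, is what makes it possible to show reviewers exactly which parts still need legal scrutiny.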
🔹 Sources:
EU AI Act, Article 14 (Human oversight for high-risk AI)
GDPR, Article 5 (Data accuracy and accountability)
Though AI is extremely useful for generating contract text, generating the resulting PDFs at scale can be a challenge. To see how to solve this, learn how to generate documents at scale from the experience of Rendin, a real estate management company.
4. Using an AI chatbot for customer support
AI Use Case: A company uses an AI chatbot for handling FAQs and support tickets.
Transparency Requirements:
✅ Clearly inform users they are speaking to AI.
✅ Allow human escalation if AI is unable to resolve the issue.
✅ Provide transparency on AI limitations (e.g., “I can answer FAQs but not process refunds”).
✅ Log chatbot interactions for accountability.
Example of Compliance:
✔️ Display an initial disclaimer: “Hi! I’m an AI assistant. I can answer your questions, but you can ask for a human anytime.”
✔️ Show confidence levels in AI responses (e.g., “I’m 80% sure this is the correct answer. Would you like me to connect you with an agent?”)
✔️ Allow users to request a transcript of their chat history.
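The chatbot checklist above (upfront disclosure, human escalation, interaction logging) can be sketched in a few lines. Everything here is hypothetical: the FAQ dictionary, the escalation trigger, and the log format are illustrative assumptions, not a real support-bot implementation.

```python
from datetime import datetime, timezone

DISCLOSURE = ("Hi! I'm an AI assistant. I can answer your questions, "
              "but you can ask for a human anytime.")

FAQ = {"opening hours": "We are open 9-17 on weekdays."}

def handle_message(message: str, log: list) -> str:
    """Answer from the FAQ, escalate on request, and log every exchange."""
    if "human" in message.lower():
        # Human escalation: never trap the user in the bot.
        reply = "Connecting you with a human agent now."
    else:
        # Unknown questions get an honest statement of the bot's limits.
        reply = FAQ.get(message.lower(),
                        "I can answer FAQs but not process refunds. "
                        "Would you like me to connect you with an agent?")
    # Log the exchange for accountability and transcript requests.
    log.append({"time": datetime.now(timezone.utc).isoformat(),
                "user": message, "bot": reply})
    return reply
```

Because every exchange is appended to the log, serving a user's transcript request is just a matter of filtering and exporting those entries.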
🔹 Sources:
EU AI Act, Article 50 (Transparency obligations for AI systems interacting with natural persons, including chatbots)
GDPR, Article 12 (User rights to clear communication)
For a practical example, check out our PDF Generator API homepage. To assist people with scalable document generation issues, we have enabled a customer support chat that uses AI to resolve questions. It is also a useful way to discover no-code document generation integrations.
Key Takeaways
✔️ Be upfront about AI usage – Users must know when AI is involved.
✔️ Explain AI-generated outcomes – Show how AI makes decisions.
✔️ Provide human oversight – Allow human intervention when necessary.
✔️ Ensure fairness and accountability – Prevent biased AI decisions.
This article is inspired by a StartupDay 2025 seminar led by Maarja Lehemets, Maarja Pild-Freiberg, and Aleksander Tsuiman, top experts in AI and GDPR compliance. Maarja and Maarja, both key figures at TRINITI, specialize in data protection law, while Aleksander, Head of Product Legal & Privacy at Veriff, drives AI-driven compliance solutions. If you would like to dive deeper into AI compliance, we definitely recommend reaching out to them on LinkedIn.
Disclaimer: This article is written for educational purposes only and does not constitute professional legal advice. When in doubt, always consult a professional.