AI chatbots on your website: do they trigger new compliance obligations?
A chatbot that answers product questions is one thing. A chatbot that screens leads, takes bookings, or makes underwriting decisions is another. The compliance surface depends on what the bot does, not on what model powers it.
A website chatbot in 2026 sits at the intersection of three frameworks that did not all exist when the deployment patterns settled. GDPR governs the personal-data side. ePrivacy governs the storage side. The EU AI Act adds transparency obligations and, for some chatbots, conformity assessment. None of these individually is novel; their combination on a single feature is.
GDPR — what chatbots collect
Every chatbot collects at minimum:
- The text the user types (often containing PII volunteered for support).
- A session identifier linking utterances within a conversation.
- Timestamps, page URL, referrer.
- Sometimes: an explicit email or name field the user filled in to start the chat.
All of this is personal data under Article 4(1). The lawful basis depends on the chatbot’s purpose: a support chatbot can rely on contract (6(1)(b)) or legitimate interest (6(1)(f)); a lead-generation chatbot generally needs consent (6(1)(a)), consistent with the EDPB’s position on direct marketing.
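Concretely, a single session’s footprint looks something like the sketch below (TypeScript; the field names are illustrative, not any vendor’s schema):

```typescript
// Illustrative shape of what one chat session accumulates.
// Field names are hypothetical -- not a specific vendor's schema.
interface ChatSessionRecord {
  sessionId: string;       // pseudonymous, but still personal data: it links utterances
  startedAt: string;       // ISO 8601 timestamp
  pageUrl: string;         // page the widget was opened on
  referrer: string | null; // where the visitor came from
  messages: Array<{
    role: "user" | "bot";
    text: string;          // free text -- users routinely volunteer PII here
    sentAt: string;
  }>;
  // Optional pre-chat form: explicit identifiers, if collected.
  email?: string;
  name?: string;
}
```

Every field here is in scope for access, deletion, and retention obligations, not just the explicit `email` and `name`.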
Article 22 — automated decision-making. Article 22 grants the right not to be subject to a decision based solely on automated processing that produces legal effects or similarly significantly affects the data subject. A chatbot answering “what time do you open?” does not trigger this. A chatbot that pre-qualifies loan applicants, screens job applicants, or routes users to different prices based on profile does. When Article 22 is triggered, the controller must offer a human review path on request.
The EU AI Act — Article 50 transparency
The EU AI Act (Regulation (EU) 2024/1689, phased application 2024–2027) imposes tiered obligations based on risk.
For chatbots, the relevant tier is Article 50 (transparency obligations for AI systems intended to interact directly with natural persons). Effective from 2 August 2026, providers must ensure the user is informed they are interacting with an AI system unless this is obvious from the context. The disclosure must be:
- Clear and visible. A line in the chat widget’s opening message.
- Provided before interaction. Not buried in terms of service.
- Accessible in the language of the user. Same locale as the chat UI.
The disclosure can be as simple as “Hi! I’m an AI assistant. Ask me about [topic].” Most modern chatbot products ship this by default, but a custom-built bot needs checking.
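For a custom-built bot, baking the disclosure into the opening message is a few lines of work. A minimal sketch, assuming a hand-rolled widget (the config shape and copy are illustrative, not a vendor API):

```typescript
// Hypothetical greeting table for a custom-built widget. The Article 50
// disclosure ships in the greeting itself: visible before the user types,
// in the same locale as the chat UI.
const greetings = {
  en: "Hi! I'm an AI assistant. Ask me about shipping and returns.",
  de: "Hallo! Ich bin ein KI-Assistent. Fragen Sie mich zu Versand und Rückgabe.",
};

function openingMessage(): string {
  // Match the disclosure language to the page locale, falling back to English.
  const lang = (document.documentElement.lang || "en").split("-")[0];
  return lang in greetings ? greetings[lang as keyof typeof greetings] : greetings.en;
}
```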
The higher-risk tiers of the AI Act apply when the chatbot performs functions in Annex III (recruitment, credit scoring, biometrics, education, essential services). Most SMB marketing or support chatbots are not Annex III. Confirm against your bot’s actual function — “help our customers” can mean very different things.
ePrivacy — the storage question
A chatbot widget typically loads a script that:
- Sets a session cookie or localStorage entry to maintain chat state.
- Stores conversation history client-side for the “continue chat” UX.
- Sometimes: fingerprints the user for fraud prevention or analytics.
All of this is “storage of information on terminal equipment” under ePrivacy 5(3). The exception for “strictly necessary for the service the user requested” covers the session state once the user has opened the chat. It does not cover the script loading before the user clicks the chat button.
The compliant pattern: the chat widget loads on click, not on page load. Sites that auto-pop the chatbot are firing storage on every visitor, including ones who never engage. That is ePrivacy 5(3) consent territory.
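A minimal version of the click-to-load pattern, as a sketch (the embed URL is a placeholder; substitute your vendor’s):

```typescript
// Click-to-load: no widget script, no cookies, no localStorage until the
// visitor actively opens the chat. WIDGET_SRC is a placeholder URL.
const WIDGET_SRC = "https://chat.example-vendor.com/widget.js";

function mountChatLauncher(): void {
  const button = document.createElement("button");
  button.textContent = "Chat with us";
  button.addEventListener(
    "click",
    () => {
      // Only now does storage fall under the "strictly necessary for a
      // service the user requested" exemption -- the user just requested it.
      const script = document.createElement("script");
      script.src = WIDGET_SRC;
      script.async = true;
      document.body.appendChild(script);
      button.remove();
    },
    { once: true },
  );
  document.body.appendChild(button);
}

mountChatLauncher();
```

The launcher button itself sets nothing on the visitor’s device, so it stays outside 5(3) entirely.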
Cross-border data transfers
Most chatbot platforms (Intercom, Drift, Zendesk, HubSpot, Tidio) are US-based. EU customer chat transcripts transferred to US servers re-engage the Schrems II / DPF analysis. The current state (DPF in force) means transfers are lawful provided the vendor is DPF-certified, but the calculus shifts if a future “Schrems III” ruling invalidates the DPF.
Vendors who host in the EU (Userlike, Crisp, Tidio EU region, Intercom EU workspace) reduce this surface. The trade-off: the feature set is typically smaller, and integrations with US-hosted CRMs may reintroduce a transfer at the CRM-write step.
The LLM-backed chatbot question
Chatbots powered by GPT-4 / Claude / Gemini introduce a second layer: the prompt-plus-context sent to the LLM provider is itself a transfer. EU customer messages flowing to OpenAI’s servers are subject to the same transfer analysis as any other US-bound data.
Two mitigation patterns:
- Use an EU-region LLM endpoint. Anthropic (eu-west-1), OpenAI (eu-region beta), Mistral, Aleph Alpha. Eliminates the transfer (a config sketch follows this list).
- Get explicit consent for the LLM hand-off. Disclose in the chatbot intro and the privacy policy that messages are processed by a third-party LLM provider, named explicitly, with the DPF status disclosed. This is what CNIL has signaled it expects for the LLM era.
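For the first pattern, most LLM SDKs accept a base-URL override. A minimal sketch with the OpenAI Node SDK; the EU endpoint URL and the model name are placeholders, so confirm the real values against your provider’s documentation:

```typescript
import OpenAI from "openai";

// Route completions through an EU-region endpoint so transcripts stay
// in-region. The baseURL is a placeholder -- check your provider's docs
// for the actual EU endpoint and its availability.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://eu.api.example-llm.com/v1", // placeholder
});

async function answer(userMessage: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o", // illustrative model name
    messages: [{ role: "user", content: userMessage }],
  });
  return completion.choices[0].message.content ?? "";
}
```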
Practical 2026 checklist
- Does the bot identify itself as AI? Required by AI Act Article 50 from August 2026.
- Does the bot make any decision with legal or similarly significant effect? If yes, set up a human-review escalation path for Article 22 (see the sketch after this checklist).
- Does the bot load before user interaction? If yes, gate it behind consent.
- Is conversation data stored beyond the session? If yes, disclose retention period in the privacy policy and offer deletion.
- Is the bot powered by an LLM hosted outside the EU? If yes, ensure DPF certification of the provider and disclose the transfer.
- Add the chatbot vendor and the LLM provider to your subprocessors page.
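For the Article 22 item, the escalation path can be as thin as a gate between bot output and user. A hypothetical sketch; `queueForHumanReview` stands in for whatever ticketing or CRM hand-off you actually use:

```typescript
// Gate any bot output that amounts to a decision with legal or similarly
// significant effect: queue it for human review instead of auto-delivering.
type BotOutcome =
  | { kind: "answer"; text: string } // FAQ reply: no Article 22 exposure
  | { kind: "decision"; subject: string; result: string }; // e.g. loan pre-qualification

function deliver(outcome: BotOutcome, sessionId: string): string {
  if (outcome.kind === "decision") {
    queueForHumanReview(sessionId, outcome); // the Article 22 escalation path
    return "A member of our team will review this and get back to you.";
  }
  return outcome.text;
}

// Stub -- wire this to your ticketing or CRM system.
function queueForHumanReview(sessionId: string, outcome: BotOutcome): void {
  console.log(`Human review queued for session ${sessionId}`, outcome);
}
```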
Veracly’s scope
Veracly fingerprints common chatbot widgets (Intercom, Drift, Zendesk, Tidio, HubSpot, Crisp, Userlike) and flags pre-consent loading as a tracking violation with a chatbot subtype. The AI Act Article 50 disclosure check is on the roadmap for the v1.1 rule pack (scheduled for Q3 2026, after Article 50 entry into application). Until then, Article 50 is a manual-review item in the report’s accessibility statement section.
See also: Is Google Analytics 4 illegal in Europe? · Tracking pixel audit
Common questions
Does the EU AI Act apply to my website chatbot?
It depends on the function. The AI Act's transparency obligations (Article 50) apply to all AI systems that interact directly with humans — including chatbots — requiring the user to be told they are interacting with AI. The higher-tier obligations (risk management, conformity assessment) apply only to "high-risk" systems, which most marketing chatbots are not.
Do chatbots trigger GDPR Article 22?
Only if the bot makes a "decision based solely on automated processing" that produces a legal or similarly significant effect on the user. A bot that books a meeting or answers FAQs does not. A bot that screens loan applications, sets insurance premiums, or rejects job candidates does.
Is chat history personal data?
Almost always. Chat transcripts are textual data linked to a session identifier; combined with timestamps, IP addresses, or any explicit identifier the user shares, they become personal data under GDPR Article 4. Treat chat history as you would email logs — retention policy, access control, deletion process.
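A retention policy only counts if something enforces it. A minimal sketch of a scheduled purge; the store interface and its `deleteOlderThan` method are assumptions to adapt to your actual data layer:

```typescript
// Scheduled purge enforcing the retention period disclosed in the
// privacy policy. The store shape below is an assumption.
const RETENTION_DAYS = 90; // must match the period stated in the policy

async function purgeExpiredTranscripts(store: {
  deleteOlderThan(cutoff: Date): Promise<number>;
}): Promise<void> {
  const cutoff = new Date(Date.now() - RETENTION_DAYS * 24 * 60 * 60 * 1000);
  const deleted = await store.deleteOlderThan(cutoff);
  console.log(`Purged ${deleted} expired chat transcripts`);
}
```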
See where your site stands.
Run a free Veracly scan and get a multi-jurisdiction report — EAA, GDPR, ADA, UK Equality Act, AODA — with copy-paste developer fixes.
Run a free scan