In this guide:
- Why AI chatbots create CIPA section 631(a) exposure
- The two 2025 cases that define the compliance line
- The four chatbot configurations and their CIPA risk profiles
- What CIPA compliance requires for AI chatbot deployments
- California SB 243 adds a disclosure layer on top of CIPA
- Frequently asked questions
For how chatbots sit alongside pixels, session replay, and GTM in one liability map, see the full AdTech risk map for eCommerce.
Does a website AI chatbot violate CIPA? It depends on two things: whether the chatbot vendor uses conversation data for its own commercial purposes, and whether users consented to the interception before the conversation began. AI chatbot wiretap claims under CIPA have grown from two matters in 2021 to thirty in 2025, making them the fastest-growing category of privacy litigation targeting website operators. Every business running a customer service chatbot, a sales chat tool, or an AI-powered virtual assistant on its website needs to understand what the law now requires — and why the answer is not as simple as adding a chat disclaimer.
Two 2025 court decisions establish the legal landscape clearly. They produced opposite outcomes from similar facts, and the difference between them is the entire compliance story. Understanding why one business won and one lost is the fastest path to understanding what your chatbot deployment requires.
Why AI chatbots create CIPA section 631(a) exposure
CIPA section 631(a) prohibits the real-time interception of communication contents by an unauthorized third party. When a user types a message into a website chatbot, they are communicating. The message content — what they typed, what they asked, what they disclosed — constitutes the contents of that communication. If the chatbot vendor's infrastructure captures those messages in real time and transmits them to the vendor's servers, the three-party interception structure that section 631(a) was written to address is present.
This is not a speculative theory. It is the same structural analysis courts applied to session replay tools, advertising pixels, and website analytics — and courts have consistently found it viable at the pleading stage for chatbots as well. The question is not whether the structure is present. It is whether the specific facts of the deployment defeat it.
Three factors make AI chatbots particularly high-risk under this framework.
The contents capture is more direct than for any other tracking tool. A user typing into a chatbot is communicating substantive messages — questions, complaints, personal information, health concerns, purchase intent. This is not behavioral metadata or URL data that a court must first construe as contents. It is the plaintext content of a communication, captured as it is typed and transmitted to a third-party vendor's servers before the conversation is complete.
AI training data use creates the independent use problem. The tape recorder exception — the primary CIPA defense for third-party tools — requires that the vendor use collected data only to serve you, not for its own commercial purposes. Most AI chatbot vendors use conversation data to improve their models, train their AI systems, and develop their products. This independent data use is often disclosed in standard vendor terms, and it is precisely the factor courts have found defeats the tape recorder defense.
The AI improvement carve-out is in nearly every standard agreement. Unlike session replay vendors, whose enterprise DPAs can often be negotiated to remove independent data use, AI chatbot vendors have structural commercial reasons to retain conversation data for model training. The product improves because of the data. Negotiating a complete independent-use restriction with an AI chatbot vendor is more difficult than with a passive recording tool, and may not be available at non-enterprise price tiers.
The two 2025 cases that define the compliance line
Taylor v. ConverseNow Technologies — CIPA claim survives (N.D. Cal. Aug. 2025)
ConverseNow provides AI-powered voice assistants that restaurants use to handle customer phone calls and drive-thru orders. When plaintiff Eliza Taylor called a Domino's Pizza location to place an order, her call was redirected — without her knowledge — to ConverseNow's AI assistant rather than a Domino's employee. ConverseNow's own marketing materials stated that the system processes millions of live conversations monthly, and that caller data is used to improve its ordering platform, advertisements, products, and services.
The court denied ConverseNow's motion to dismiss. Two findings drove the outcome. First, ConverseNow was a third-party interceptor, not an agent: the plaintiff believed she was speaking to Domino's, but her call was rerouted to a separate company's infrastructure without consent. Second, and critically, ConverseNow used the conversation data for its own commercial purposes — AI model improvement, advertising, product development — which defeated any argument that it was merely acting as Domino's agent. A vendor that uses your customers' communications to improve its own product is not your tape recorder. It is an independent party with its own interests in the conversation.
Thomas v. Papa John's International — CIPA claim dismissed (9th Cir. June 2025)
Papa John's used FullStory session replay software to monitor user interactions on its website. The plaintiff alleged Papa John's was liable under CIPA section 631(a) because FullStory, as a third party, intercepted the communications. The Ninth Circuit affirmed dismissal on the basis of the party exception: a website operator cannot eavesdrop on its own communications with a visitor. Only a third party can eavesdrop. Papa John's, as a party to the communication, was not liable under section 631(a) for using technology to monitor its own sessions.
The dismissal turned on the specific pleading: the plaintiff failed to allege that Papa John's aided and abetted FullStory's independent violation of CIPA. The court was clear that if the plaintiff had alleged FullStory was acting as an independent eavesdropper — using the data for FullStory's own purposes — and that Papa John's knowingly enabled that, the claim could proceed.
What the two cases establish together
The compliance line is the vendor's data use. A chatbot vendor that uses your customers' conversations only to serve you — with no independent data use for its own model training, product improvement, or advertising — can potentially be characterized as your agent rather than a third-party eavesdropper. A chatbot vendor that uses conversation data for its own commercial purposes — including AI model improvement — is an independent party, and the tape recorder defense is unavailable.
For most AI chatbot vendors, the second description is accurate. The AI improves because of the data. That is the product. And that is the CIPA problem.
The four chatbot configurations and their CIPA risk profiles
Live chat with third-party routing
A chat widget that routes messages through a third-party vendor's servers before delivering them to your team. Byars v. Sterling Jewelers established that this architecture — vendor servers standing between the user and the recipient — satisfies the section 631(a) interception structure. The tape recorder defense is available if the vendor DPA prohibits independent data use and the AI training carve-out has been removed. Risk: high without the correct vendor contract; manageable with an enterprise DPA.
AI chatbot with conversation data used for model training
The ConverseNow fact pattern. The vendor's AI improves because it trains on your customers' conversations. Even if the vendor is characterized as providing a service to you, its independent commercial use of the data defeats the agent characterization under the Javier standard. Risk: high. The tape recorder defense is likely unavailable regardless of contract structure unless the AI training carve-out is completely and verifiably removed.
Embedded AI chatbot deployed solely on your behalf
A chatbot vendor with a fully executed enterprise DPA, explicit independent-use restriction, and verified AI training opt-out. The vendor processes conversation data only to provide the chatbot service to you, with no independent commercial use. This is the Papa John's / Graham v. Noom structure applied to chatbots. Risk: lower, but depends entirely on whether the contractual restrictions are actually honored and enforceable. Requires verification, not just a signed agreement.
In-house built chatbot on your own infrastructure
A chatbot built and operated entirely on your own servers, with no third-party vendor receiving conversation data. The party exception applies: you cannot eavesdrop on your own communications with your visitors. Risk: lowest for section 631(a). However, consent architecture is still required for any analytics or behavioral tracking tools that run alongside the chatbot, and privacy policy disclosure of the chatbot's data practices is still required.
What CIPA compliance requires for AI chatbot deployments
Three requirements apply simultaneously. All three must be met for a defensible compliance posture.
Prior consent before the chatbot initializes. The same prior consent standard that applies to session replay tools and advertising pixels applies to chatbots. The chatbot must not load and begin capturing inputs until a user has affirmatively consented. In practice, this means the chatbot initialization script must be gated behind a consent mechanism — not loaded immediately on page load, not initialized before the banner resolves. A user who types their first message before consenting has had that message intercepted without prior consent.
Vendor contract review for independent data use restrictions. Before deploying any AI chatbot, review the vendor's standard terms for: AI model training carve-outs, product improvement data use, benchmarking data use, and anonymized data sharing. Each of these is a form of independent data use that can defeat the tape recorder defense. For enterprise deployments, negotiate the removal of these carve-outs explicitly. For standard or free-tier deployments, treat the tape recorder defense as unavailable and rely on the consent mechanism as the only protection.
Privacy policy disclosure specifically covering the chatbot. The chatbot vendor must be named in the privacy policy. What data the chatbot captures, how it is used, and whether it is shared with the vendor's AI training infrastructure must be disclosed accurately. A privacy policy that describes your analytics and advertising tools but omits the chatbot is a policy-to-practice mismatch — consistently cited in demand letters as evidence of bad faith and as an independent basis for CIPA exposure.
California SB 243 adds a disclosure layer on top of CIPA
Effective January 1, 2026, California's SB 243 imposes specific requirements on AI companion chatbots — chatbots designed for extended personal interaction, emotional support, or companionship. SB 243 requires disclosure that the user is interacting with an AI, safety protocols that activate when users express thoughts of self-harm, and restrictions on content delivered to minors. It includes a private right of action.
SB 243 applies to a specific category of chatbot — companion AI — rather than all customer service chatbots. But the broader regulatory trend it represents is significant: California is building layered requirements for AI systems that interact with consumers, with disclosure obligations, consent requirements, and private rights of action that compound rather than replace CIPA's existing structure. A business that satisfies every SB 243 disclosure requirement can still face a CIPA wiretap claim if its chatbot captures conversation data before consent is received.
CIPA and SB 243 are independent obligations. CIPA addresses the timing and consent of interception. SB 243 addresses disclosure about the nature of the AI system. Both apply. Neither satisfies the other.
Frequently asked questions
Does CIPA apply to all AI chatbots on websites?
CIPA section 631(a) applies to website chatbots when the chatbot vendor's infrastructure captures communication contents in real time through third-party servers. Whether a specific chatbot deployment creates actionable CIPA exposure depends on whether the vendor qualifies as a third-party eavesdropper or as the website operator's agent. The key factors are whether the vendor uses conversation data for its own commercial purposes (including AI model training) and whether prior consent was obtained before the chatbot captured any messages.
Does the party exception protect me from CIPA liability for my own chatbot?
The party exception — applied in Thomas v. Papa John's — means a website operator cannot eavesdrop on its own communications with a visitor. If the chatbot is built and operated entirely on your own infrastructure, with no third-party vendor receiving conversation data, the party exception applies to you as the website operator. If a third-party vendor's infrastructure is involved and that vendor has independent data use capabilities, you may be liable for aiding and abetting the vendor's section 631(a) violation even though the party exception shields you from direct liability.
My chatbot vendor says our agreement makes them a data processor. Does that protect me?
Processor classification alone is insufficient. The tape recorder defense requires that the vendor be contractually prohibited from using conversation data for any independent commercial purpose — including AI model training, product improvement, and benchmarking. A DPA that classifies the vendor as a processor but contains carve-outs for AI improvement or system enhancement does not support the defense. Read the specific data use restrictions, not just the processor designation heading.
What about live chat tools that use AI for suggestions but are handled by human agents?
The same analysis applies. If the chat platform routes messages through third-party servers and the platform vendor has the capability to use conversation data for its own purposes — including training AI suggestion models on the message content — the exposure exists regardless of whether the ultimate response is from a human or an AI. The interception occurs at the point of message capture, not at the point of response generation.
Does adding a chat disclaimer in the chatbot window satisfy the CIPA consent requirement?
No. A disclaimer displayed inside the chatbot window after the chatbot has initialized does not constitute prior consent. CIPA requires that consent be obtained before interception begins. If the chatbot initializes on page load — even before the user has opened the chat window — and the vendor's script has already begun capturing browsing behavior or is ready to capture typed inputs, the interception structure is present before any disclaimer is shown. Prior consent requires a technical architecture that prevents the chatbot from initializing until consent is received, not a notice displayed within the interface the chatbot already controls.
What this means for your chatbot deployment
The growth of AI chatbot CIPA claims from two matters in 2021 to thirty in 2025 is not a coincidence or a legal technicality. It is the plaintiffs' bar recognizing that chatbots create the most direct section 631(a) fact pattern of any website technology: a third-party vendor capturing the plaintext contents of user communications in real time, frequently using those communications to improve its own AI product, on millions of websites that have no consent architecture covering the chatbot.
The compliance program for chatbots is the same as for every other high-risk tracking tool, with one additional layer: vendor contract review specifically for AI training data use. Pre-consent blocking, GPC detection, server-side consent records, and a privacy policy that accurately names the chatbot vendor and describes its data practices are all required. The AI training carve-out review is the additional step that distinguishes chatbot compliance from session replay compliance.
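Of the requirements listed above, GPC detection is the most mechanical to implement. The sketch below assumes the Global Privacy Control signal, which browsers and extensions that support the GPC proposal expose to scripts as navigator.globalPrivacyControl; effectiveConsent is a hypothetical helper name, and the stored-consent lookup is left abstract.

```typescript
// Sketch of GPC-aware consent evaluation (hypothetical helper). The GPC
// signal is an affirmative opt-out: when it is set, it should override
// any stored consent record, and the chatbot should not initialize.

interface GpcNavigator {
  globalPrivacyControl?: boolean; // per the Global Privacy Control proposal
}

function effectiveConsent(storedConsent: boolean, nav: GpcNavigator): boolean {
  // Honor the opt-out signal even if a consent record exists.
  if (nav.globalPrivacyControl === true) return false;
  return storedConsent;
}

// In a browser this would be called as:
//   effectiveConsent(readConsentRecord(), navigator as GpcNavigator)
// where readConsentRecord() is whatever your consent platform provides.
console.log(effectiveConsent(true, { globalPrivacyControl: true })); // prints: false
console.log(effectiveConsent(true, {}));                             // prints: true
```

The design choice worth noting: GPC is evaluated at initialization time rather than baked into the stored record, so a user who enables GPC after consenting is still honored on their next visit.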
The infrastructure answer
The free PieEye compliance scan identifies whether your chatbot is initializing before consent is received — the most common failure mode and the one that makes every subsequent message an unconsented interception regardless of what the vendor contract says.
For the complete technical architecture for gating chatbot initialization behind prior consent, the CIPA compliance guide covers the implementation requirements alongside every other high-risk tracking tool in your stack.
Run a free PieEye compliance scan — it takes minutes, requires no code changes to initiate, and tells you exactly what a plaintiffs' attorney's scanning tool would find if it looked at your website today.