On 9 March 2026, a federal court in San Francisco said something the press has had a year to digest and still hasn't. Comet, Perplexity's shopping agent, was accessing Amazon with the user's permission but without authorization by Amazon. The same day, the UK CMA published the first regulator's framework for businesses deploying agents in consumer-facing roles. The two documents land on opposite sides of the same gap. This is a field note on that gap.
Consent stopped being a one-party question the moment your software started walking through someone else's door.
The shift the agentic web represents · 2026
Most coverage of agent law has collapsed two distinct legal events into one. If you are running a compliance review on an agentic product right now, these are the three facts that change the shape of the question.
When a customer installs Comet, opens ChatGPT Agent mode, or hands credentials to Claude computer use, they are giving the agent provider a contractual right to act on their behalf. The provider's terms of use cover this leg. That part of the consent chain is uncontroversial.
When the agent then hits Amazon, Booking.com, or any platform you have an account on, it is presenting your credentials at a door governed by terms you accepted with the platform. The agent provider has signed nothing with that platform. The Ninth Circuit will spend the next year deciding whether that matters.
California AB 316 became operative on 1 January 2026. It removes "the AI acted autonomously" as a usable civil defence in the state where most of the major agent providers are headquartered. The CMA's 9 March 2026 guidance applies the same rule in the UK: the business answers for the agent. That removal is the loudest signal in the rollout.
Most reporting on Amazon v. Perplexity treats the consent question as binary: did the user agree, yes or no. The court's order pulls apart a different shape. There are three legally distinct contracts in every agent interaction, two of them entered into long before the user opens their assistant. The gap between layer 2 and layer 3 is where the case sits; the sketch below makes the structure concrete.
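A minimal sketch of that three-contract structure, with purely illustrative names (nothing below is drawn from the filings): the first two legs of the chain exist as agreed terms; the third does not.

```python
from dataclasses import dataclass

@dataclass
class Contract:
    party_a: str
    party_b: str
    signed: bool  # has this leg of the consent chain actually been agreed?

def consent_chain(user: str, agent_provider: str, platform: str) -> list[Contract]:
    return [
        # Layer 1: user <-> platform, accepted long before the agent existed.
        Contract(user, platform, signed=True),
        # Layer 2: user <-> agent provider, via the provider's terms of use.
        Contract(user, agent_provider, signed=True),
        # Layer 3: agent provider <-> platform. Nothing has been signed here;
        # this is the gap the court's order isolates.
        Contract(agent_provider, platform, signed=False),
    ]

for leg in consent_chain("user", "agent provider", "platform"):
    status = "covered" if leg.signed else "NOT covered"
    print(f"{leg.party_a} <-> {leg.party_b}: {status}")
```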
The legal architecture for agents did not appear in March 2026. The product launches happened across 2024 and 2025; the state laws began taking effect in January 2026; the first major court order arrived in March. Each step matters because it shows what the providers agreed to when, and where regulators have already drawn lines.
Sources: Amazon.com Services LLC v. Perplexity AI Inc., N.D. Cal., preliminary injunction order, 9 March 2026 · CMA, Complying with consumer law when using AI agents, 9 March 2026 · DRCF, The Future of Agentic AI, 31 March 2026 · OpenAI ChatGPT Agent documentation · Anthropic computer use documentation · California AB 316 · Texas HB 149 (TRAIGA) · EU Regulation 2024/1689 · EU PLD 2024/2853.
The Perplexity blog post of 4 November 2025 ("Bullying is not innovation") and Amazon's complaint filed the same day are not arguing about the same product. They are arguing about which leg of the consent chain controls. The diff below is the same pattern that ran underneath Camera Roll Cloud, applied to agents: a narrow promise to the user against a broad term that governs the rest of the system.
Most of these numbers come from one of two places: the Amazon complaint and Judge Chesney's order, or peer-reviewed and industry data on how organisations are actually deploying agents. The pattern is consistent — deployment is moving faster than the controls.
At least five: warnings Amazon says it sent Perplexity between November 2024 and August 2025 before suing. The cease-and-desist trail was a material factor in the unauthorized-access finding.
Roughly 24 hours: the time between Amazon shipping a technical block on Comet (August 2025) and Perplexity shipping a software update to bypass it. The persistence-after-block pattern shaped the irreparable-harm finding.
10% of worldwide turnover: the maximum penalty under the UK Digital Markets, Competition and Consumers Act 2024 for breaches of consumer law via agents. The CMA can impose this directly, without court proceedings, since the DMCCA's commencement.
60%: organisations surveyed in 2026 that lack a working kill switch for agents they have deployed. 63% cannot enforce purpose limitations; 55% cannot isolate AI from the broader network. 100% have agentic AI on the roadmap.
100%: adversarial rejection rate across 600 attack attempts in Tallam's reference implementation of Invocation-Bound Capability Tokens — a proposed cryptographic floor for agent delegation. The protocol is not yet a standard.
0.049ms: median verification latency for IBCT delegation in the same reference implementation. Fast enough to inline at every tool call, and low enough that even multiplied across every platform a single agent might touch, the total verification overhead stays measurably below human reaction time.
79.5%: share of agent actions that proceeded autonomously under Chen's six-check Boundary Engine prototype; 6.1% escalated to a human; 14.4% were blocked. The "fail-and-report" pattern is becoming the dominant academic recommendation for human-in-the-loop agent governance; a minimal sketch follows.
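A minimal sketch of the gate shape, assuming a simple allow/escalate/block verdict. The two checks shown are illustrative stand-ins, not Chen's six.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"          # proceed autonomously
    ESCALATE = "escalate"    # pause and ask a human
    BLOCK = "block"          # refuse, and report rather than silently retry

def boundary_check(action: dict) -> Verdict:
    # Check 1 (illustrative): high-stakes irreversible actions need a human.
    if action["irreversible"] and action["value_usd"] > 100:
        return Verdict.ESCALATE
    # Check 2 (illustrative): anything outside the delegated scope fails loudly.
    if action["target"] not in action["allowed_targets"]:
        return Verdict.BLOCK
    return Verdict.ALLOW

print(boundary_check({"irreversible": False, "value_usd": 30,
                      "target": "example.com", "allowed_targets": ["example.com"]}))
# Verdict.ALLOW
```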
In Camera Roll Cloud the loudest signal was a state-level carve-out: Illinois and Texas excluded under BIPA and CUBI. For agents the pattern is bigger and louder. The United States is regulating agents at the state level while the federal executive is actively trying to preempt those laws. The EU and UK are building parallel frameworks that bind on different bases. The map looks coherent only if you read it by jurisdiction — and even then, only barely.
Business deploying an agent is responsible for what the agent does, as if it were an employee. 10% of worldwide turnover ceiling. The CMA can enforce directly without court proceedings.
CMA, FCA, ICO and Ofcom co-sign a cross-regulatory paper defining a five-level autonomy spectrum and seven categories of compliance risk. Not policy. Read as policy.
The ICO's draft statutory Code of Practice on AI and automated decision-making, following the Data (Use and Access) Act 2025. Article 22 GDPR equivalent for agent-driven decisions.
Providers of AI systems that interact with natural persons must disclose the interaction. The provision does not cover agent-to-platform interactions where no natural person is on the receiving end — that gap is documented in the academic literature.
Software, including AI, is explicitly captured as a "product" under strict liability. AI Act compliance becomes the de facto safety benchmark for defectiveness. Transposition deadline is the hardest date on the regulatory calendar this year.
The older spine of agent regulation in the EU. Solely automated decisions producing legal effects already require a lawful basis, the right to obtain human intervention, and the right to contest. Pre-existing law that survives every newer framework.
Removes "the artificial intelligence acted autonomously" as a defence in civil litigation. The most quoted line on agent liability in 2026 is now also operative law in the state where most agent providers are incorporated.
The California Comprehensive Computer Data Access and Fraud Act was the second cause of action in Amazon v. Perplexity, alongside the federal CFAA. California has become the de facto venue for agent-trespass litigation.
Texas HB 149: governance, transparency, manipulative-AI prohibitions. Colorado: annual impact assessments for high-risk AI. Two more state regimes that an agent product cannot ignore by 30 June 2026.
The 1986 federal anti-hacking statute is doing the legal work at the platform layer. hiQ v. LinkedIn (9th Cir., 2022) governs public-page scraping; Amazon v. Perplexity is about logged-in account access, a category the CFAA still reaches.
The December 2025 executive order directs Commerce to identify conflicting state AI laws and stands up an "AI Litigation Task Force" tasked with challenging them. Preemption against the state-level patchwork is the political counter-pressure to AB 316, TRAIGA, the Colorado AI Act, and the rest.
None directly addresses the user-agent-platform problem yet. Each has the surface area. India's DPDP Act, in force, is the most obvious gap in the global agent-regulation map — and the next likely addition to the Library on this site.
The architectural choice is the policy choice. Where each agent provider places the identification — client-side, network-layer, or nowhere — determines who carries the legal exposure when the agent walks into someone else's house. The three cards below are not rankings. They are different theories of who is responsible.
Signature-Agent: https://chatgpt.com. Public key directory at a well-known URL.
Sources: Anthropic computer use & Claude-User documentation · OpenAI ChatGPT Agent allowlisting (HTTP Message Signatures, RFC 9421) · Amazon.com Services LLC v. Perplexity AI Inc., N.D. Cal., preliminary injunction order, 9 March 2026 · Perplexity blog "Bullying is not innovation," 4 November 2025.
The DRCF's foresight paper sets out a five-level autonomy spectrum for agentic systems. The legal exposure scales with where on the spectrum a product sits — and the user-agent-platform gap only emerges at the level where the agent is genuinely choosing what to do next.
Source: Digital Regulation Cooperation Forum (CMA · FCA · ICO · Ofcom), The Future of Agentic AI foresight paper, 31 March 2026. The spectrum is the DRCF's; the legal-exposure mapping along the bottom is this site's reading. The user-agent-platform gap addressed in §01 emerges meaningfully from Level 3 onwards — once the agent is choosing what to do next, the user can no longer be said to have authorized each action.
Like Camera Roll Cloud, this story has a polite version that runs in trade press and a fuller version sitting underneath. These are the three framings that have been doing the most work to obscure the substance.
The CMA's scenario is the business deploying an agent to its own customers (chatbots, refund handlers, marketing). The Amazon scenario is the consumer running an agent against someone else's platform (shopping, booking, account management). The legal exposure is structurally different. So is the self-test.
Five categories of source: peer-reviewed scholarship (top), the case law, the regulator publications, the primary law, and the news. The peer-reviewed layer is what differentiates this piece — most agent commentary cites only practitioner pieces and trade press. The legal-academic and computer-science literatures have already converged on the core diagnosis: the user-to-agent step works, the agent-to-platform step doesn't.
Cohen, Kolt, Bengio, Hadfield & Russell argue that governance frameworks must address AI systems that cannot be safely tested in deployment. Long-term planning agents — DRCF Levels 4 and 5 — can recognise test environments. The paper's regulatory proposal is compute-level restraint. Co-authored by a Turing Award laureate (Bengio), the standard-textbook author on AI (Russell), and Anthropic's senior advisor on governance (Hadfield). The most authoritative academic peg for this entire field note.
Kolt's paper, forthcoming in Notre Dame Law Review Vol. 101, brings two analytic frameworks to agent governance: the economic theory of principal-agent problems and the common-law doctrine of agency relationships. It is the seminal legal-academic statement of the problem this field note deconstructs: when one party (the principal) relies on another (the agent) to act on their behalf, information asymmetry and authority structures become the central question. Almost every later legal paper on agents cites this one.
Tallam (2026) formalises authorization propagation as a workflow-level property and identifies three sub-problems — transitive delegation, aggregation inference, and temporal validity — plus seven structural requirements for authorization architectures. The reference implementation of Invocation-Bound Capability Tokens hits 0.049ms verification latency and 100% adversarial rejection across 600 attack attempts. The technical floor that the policy debate doesn't yet have.
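The invocation-binding idea is small enough to sketch. The block below is a generic HMAC illustration of a token that is valid for exactly one tool call, with exactly these arguments, inside a time window; it is not the IBCT protocol itself, and every field name and the signing scheme are assumptions.

```python
import hashlib, hmac, json, time

SECRET = b"delegation-key"  # illustrative; a real design would use per-delegation keys

def args_digest(args: dict) -> str:
    return hashlib.sha256(json.dumps(args, sort_keys=True).encode()).hexdigest()

def mint(tool: str, args: dict, ttl_s: float = 30.0) -> dict:
    claims = {"tool": tool, "args_hash": args_digest(args), "expires": time.time() + ttl_s}
    mac = hmac.new(SECRET, json.dumps(claims, sort_keys=True).encode(), "sha256").hexdigest()
    return {**claims, "mac": mac}

def verify(token: dict, tool: str, args: dict) -> bool:
    claims = {k: token[k] for k in ("tool", "args_hash", "expires")}
    expected = hmac.new(SECRET, json.dumps(claims, sort_keys=True).encode(), "sha256").hexdigest()
    return (hmac.compare_digest(token["mac"], expected)   # untampered
            and token["tool"] == tool                     # bound to this tool
            and token["args_hash"] == args_digest(args)   # and to these arguments
            and time.time() < token["expires"])           # temporal validity

t = mint("place_order", {"sku": "B0XYZ", "qty": 1})
assert verify(t, "place_order", {"sku": "B0XYZ", "qty": 1})
assert not verify(t, "delete_account", {})  # the token cannot be reused elsewhere
```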
South et al. define authenticated delegation as the verification that (a) an interacting entity is in fact an AI agent, (b) acting on behalf of a specific human user, and (c) granted the necessary permissions for specific actions. Proposes a method for expressing flexible, natural-language permissions for agents and transforming them into auditable, fine-grained access-control rules. The companion technical paper to Kolt's legal frame.
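A minimal sketch of the three properties as a platform-side check, assuming a flat credential shape the paper does not prescribe (in the paper, the credential would itself be cryptographically signed and the permissions derived from natural-language grants).

```python
def verify_delegation(credential: dict, requested_action: str) -> bool:
    # (a) the interacting entity is in fact an AI agent
    is_agent = credential.get("subject_type") == "ai_agent"
    # (b) it acts on behalf of a specific human user
    has_principal = bool(credential.get("on_behalf_of"))
    # (c) it holds the necessary permission for this specific action
    is_permitted = requested_action in credential.get("permissions", [])
    return is_agent and has_principal and is_permitted

cred = {"subject_type": "ai_agent",
        "on_behalf_of": "user:alice",
        "permissions": ["search_catalog", "add_to_cart"]}
print(verify_delegation(cred, "add_to_cart"))  # True
print(verify_delegation(cred, "place_order"))  # False: beyond the granted scope
```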
A 2026 survey paper that names the specific phenomenon this field note dissects: service-level delegation blurs authorization boundaries, and "trust-authorization mismatch" arises when agents over-trust peers or services and perform actions beyond their intended scope. Multi-agent configurations introduce new pathways for prompt-infection-style propagation and attacks targeting communication channels. The clearest taxonomy yet of the agent-web threat surface.
Proposes a framework that profiles agentic plan→act→observe→reflect loops and maps risks onto structured taxonomies extended with agent-specific vulnerabilities. Continuous-governance primitives: semantic telemetry, dynamic authorization, anomaly detection, interruptibility. Provenance and accountability reinforced through cryptographic tracing. Relevant to every CMA-guidance question about human oversight and remediation at scale.
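The cryptographic-tracing primitive is the easiest to make concrete. Below is a minimal hash-chained action log, with an assumed record shape; the paper's other primitives (semantic telemetry, dynamic authorization, interruptibility) are not modelled here.

```python
import hashlib, json, time

def record_hash(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append(log: list, step: str, detail: str) -> None:
    body = {"step": step, "detail": detail, "ts": time.time(),
            "prev": log[-1]["hash"] if log else "genesis"}
    log.append({**body, "hash": record_hash(body)})

def verify(log: list) -> bool:
    prev = "genesis"
    for r in log:
        body = {k: r[k] for k in ("step", "detail", "ts", "prev")}
        if r["prev"] != prev or r["hash"] != record_hash(body):
            return False  # any tampering with history breaks the chain
        prev = r["hash"]
    return True

log: list = []
append(log, "plan", "find the cheapest listing")
append(log, "act", "GET /product/123")
assert verify(log)
log[0]["detail"] = "something else"  # tamper with an earlier record
assert not verify(log)
```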
Nissen, Neumann, Mikusz, Gianni, Clinch, Speed & Davies (CHI '19). The pre-LLM paper that asked the question this field note asks: what happens to informed consent when individuals delegate consent decisions to systems acting on their behalf? Empirically grounded; concludes that the standard consent model breaks at the point of delegation. The closest thing in the HCI literature to a foundational reference for the user-to-agent leg of the stack.
Doctrinal and functional analysis of Article 50(1)'s scope and gaps. Identifies the central limitation that this field note relies on: the provision excludes interactions between AI systems with no human intermediation. The paper argues that without further interpretative guidance, Article 50 will remain "a formalistic gesture rather than a substantive guarantee." Crucial for understanding why the EU's 2 August 2026 transparency deadline doesn't close the agent-to-platform gap.
Judge Maxine M. Chesney's preliminary injunction order, 9 March 2026, in the U.S. District Court for the Northern District of California. The first major US case to test whether the Computer Fraud and Abuse Act applies to AI agents acting at user direction on third-party platforms. The court's finding that Comet accessed Amazon "with the Amazon user's permission, but without authorization by Amazon" is the legal pivot that this entire field note rests on.
On 18 March 2026, Circuit Judges Eric Miller and Patrick Bumatay administratively stayed Judge Chesney's preliminary injunction, with the stay in force only until the appellate court rules on the merits. Perplexity's argument: under the CFAA the only "access" was by users of the Comet browser, not by Perplexity. The case is genuinely unsettled — the district court's framing is not yet binding precedent.
The first guidance from a major consumer-protection authority anywhere in the world specifically addressing AI agents in consumer-facing roles. Four operational principles: transparency, compliance by design, human oversight, swift remediation. The business deploying the agent is responsible for what it does, including where a third party designed or supplied it. Enforcement via the DMCCA: up to 10% of worldwide turnover, imposed by the CMA without court proceedings.
Cross-regulatory paper from the Digital Regulation Cooperation Forum — CMA, FCA, ICO and Ofcom together. Defines the five-level autonomy spectrum used in §07 of this page, catalogues seven categories of compliance risk, and distinguishes "amplified" from "novel" risks. Carries a polite disclaimer that it should not be read as policy. Should be read as policy.
Transparency obligations for AI systems that interact with natural persons, generate synthetic content, perform emotion recognition, or produce deep fakes. Applicable from 2 August 2026. The Commission consultation on implementation guidelines closes 3 June 2026. The core gap for this field note: Article 50 obliges providers when there is a natural person on the other side — not when the agent is talking to another platform.
The technical reference for OpenAI's network-layer agent identification. Every outbound request from ChatGPT Agent is signed under RFC 9421 with a Signature-Agent header set to https://chatgpt.com, with public keys discoverable at a well-known URL. Recognised by Akamai, Cloudflare and HUMAN AgenticTrust as a verified bot. The cleanest technical primary source on the network-attested approach in §06.
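What that looks like on the wire can be sketched. The block below is a simplified illustration of RFC 9421-style signing with a Signature-Agent header; real signature bases follow stricter component-serialisation rules than shown here, and the key handling is illustrative (it uses the third-party cryptography package).

```python
import base64, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()  # the matching public key would be published
                                    # at a well-known URL for verifiers

def sign_request(method: str, authority: str, path: str) -> dict:
    created = int(time.time())
    # Simplified signature base over the covered components, one per line.
    base = (f'"@method": {method}\n'
            f'"@authority": {authority}\n'
            f'"@path": {path}\n'
            f'"signature-agent": https://chatgpt.com')
    sig = base64.b64encode(key.sign(base.encode())).decode()
    return {
        "Signature-Agent": "https://chatgpt.com",
        "Signature-Input": ('sig1=("@method" "@authority" "@path" "signature-agent")'
                            f';created={created};alg="ed25519"'),
        "Signature": f"sig1=:{sig}:",
    }

for header, value in sign_request("GET", "example.com", "/product/123").items():
    print(f"{header}: {value}")
```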
Anthropic's developer documentation for computer use. The framing is consistent with §06's Approach A: computer use is a client-side tool; screenshots, mouse actions, keyboard inputs and files are captured and stored in the developer's environment, not Anthropic's. Developers must inform end users of risks and obtain consent before enabling the feature. The policy choice is structural, not just textual.
Operative 1 January 2026. Removes "the artificial intelligence acted autonomously" as a usable defence in civil litigation arising from harm caused by AI. Most directly relevant to agent products headquartered in California, but as a litigation rule, available to any plaintiff bringing suit in California courts against a defendant whose AI did harm.
Directive (EU) 2024/2853. Software, explicitly including AI, is captured as a "product" under strict liability. Member-state transposition deadline 9 December 2026. AI Act compliance becomes the de facto safety benchmark when courts assess defectiveness. The hardest date on the EU regulatory calendar for any business shipping AI agents into European markets this year.
Todd Bishop's reporting on the 9 March 2026 ruling. The cleanest single-piece summary of the dispute's procedural history, including Amazon's claim of at least five cease-and-desist warnings starting November 2024 and Perplexity's software-update-in-24-hours pattern after the August 2025 technical block.
Matt G. Southern's reading of what the order changes for platforms and SEO. The most useful framing of why the court "treated user consent and platform authorization as two separate requirements" — and why that wording is the line that matters when platforms write the next version of their terms.
Cooley's structured walk-through of the CMA guidance's four principles — transparency, compliance by design, human oversight, swift remediation — and what each implies in practice. Includes the operational point most internal compliance teams miss: forward-looking remediation is not a cure for, or a defence against, previous breaches.
The clearest unpacking of the DRCF foresight paper as it lands on operational compliance teams. Source for the §04 figures: 63% of organisations cannot enforce purpose limitations on agents; 60% cannot terminate misbehaving agents; 55% cannot isolate them from broader networks. 100% have agentic AI on the roadmap. The gap between deployment ambition and operational reality.