Field note · Agentic AI & authorization · May 2026

Authorized by you.
Not authorized by them.

On 9 March 2026, a federal court in San Francisco said something the press has had two months to digest and still hasn't. Comet, Perplexity's shopping agent, was accessing Amazon with the user's permission but without authorization by Amazon. The same day, the UK CMA published the first regulator's framework for businesses deploying agents in consumer-facing roles. The two documents land on opposite sides of the same gap. This is a field note on that gap.

Consent stopped being a one-party question the moment your software started walking through someone else's door.
The shift the agentic web represents · 2026
§ Orientation · The 30-second self-test

Three things to know before anything else.

Most coverage of agent law has collapsed two distinct legal events into one. If you are running a compliance review on an agentic product right now, these are the three facts that change the shape of the question.

§ Yes · The user authorizes the agent

When a customer installs Comet, opens ChatGPT Agent mode, or hands credentials to Claude computer use, they are giving the agent provider a contractual right to act on their behalf. The provider's terms of use cover this leg. That part of the consent chain is uncontroversial.

§ But · The platform never agreed

When the agent then hits Amazon, Booking.com, or any platform you have an account on, it is presenting your credentials at a door governed by terms you accepted with the platform. The agent provider has signed nothing with that platform. The Ninth Circuit will spend the next year deciding whether that matters.

§ And · Autonomy is not a defence

California AB 316 became operative on 1 January 2026. It removes "the AI acted autonomously" as a usable civil defence in the state where most of the major agent providers are incorporated. The CMA's 9 March 2026 guidance applies the same rule in the UK: the business answers for the agent. That carve-out is the loudest signal in the rollout.

§ 01 · Anatomy

The agent isn't one contract. It's three.

Most reporting on Amazon v. Perplexity treats the consent question as binary: did the user agree, yes or no. The court's order pulls a different shape apart. There are three legally distinct contracts in every agent interaction, two of them entered into long before the user opens their assistant. The gap between layer 2 and layer 3 is where the case sits.

§ Layer 1 · User → Agent provider
The user
— Accepts the agent provider's ToU and Privacy Policy.
— Provides credentials, intent, and the goal.
— Indemnifies the provider for third-party claims arising from use.
What this contract covers
— Agent's right to read inputs, perform tool-calls, store memory.
— Liability cap (commonly fees paid in the last 12 months or $100).
— Mandatory arbitration and class-action waiver in the US.
Status · Consented

§ Layer 2 · Agent provider → Platform
The agent provider
— Has a contract with the user. Has nothing signed with the platform.
— May identify the agent in the request (RFC 9421, user-agent string), or not.
— Hopes its Usage Policy and the user's instruction are enough.
What this contract covers
— Whatever the provider has unilaterally adopted as policy.
— Robots.txt and well-known files the agent chooses to respect.
— Public platform ToS the provider has chosen to read.
Status · Unilateral / inferred
⚠ The authorization gap

§ Layer 3 · Platform → User account
The platform
— Has a ToS with the user, often prohibiting automation in logged-in areas.
— May explicitly require AI agents to identify themselves (Amazon does).
— Has not negotiated anything with the agent provider.
What this contract covers
— Account-level acceptable use, automation rules, data scraping limits.
— Remedies: CFAA, CDAFA, Computer Misuse Act, account suspension, suit.
— Trespass theory: Amazon v. Perplexity, N.D. Cal., 9 Mar 2026.
Status · Held against the user
User layer · consented
Agent layer · unilateral
Platform layer · pre-existing
Authorization gap · not bridged by user consent
§ Peer-reviewed: the structure has a name
Tallam (2026, arXiv:2605.05440) formalises this as authorization propagation and identifies three sub-problems: transitive delegation, aggregation inference, and temporal validity. South et al. (2025, arXiv:2501.09674) define authenticated delegation as the verification that (a) an agent is an agent, (b) acting for a specific user, and (c) granted specific permissions. Both papers converge on the same finding: the user-to-agent step is the one current systems do well, and the agent-to-platform step is the one they do badly.
§ 02 · Timeline

How we got here, in eleven steps.

The legal architecture for agents did not appear in March 2026. The product launches happened across 2024 and 2025; the state laws began taking effect in January 2026; the first major court order arrived in March. Each step matters because it shows what the providers agreed to when, and where regulators have already drawn lines.

OCT 2024 · Claude computer use · client-side; deployer carries the risk
23 JAN 2025 · OpenAI Operator · US Pro preview
31 AUG 2025 · ChatGPT Agent · RFC 9421 signing, cryptographic identity
4 NOV 2025 · Amazon files suit · N.D. Cal.; CFAA & CDAFA
1 JAN 2026 · CA AB 316 + TX TRAIGA live · "AI did it" no longer a defence
9 MAR 2026 · Same day: Amazon injunction + CMA guidance
18 MAR 2026 · 9th Circuit lifts injunction (administrative) · merits review pending
31 MAR 2026 · DRCF foresight paper · 4 UK regulators in unison
30 JUN 2026 · Colorado AI Act commences · high-risk AI impact assessments mandatory
2 AUG 2026 · EU AI Act Art. 50 applies · agent-to-agent gap
9 DEC 2026 · EU PLD transposed · software incl. AI = product (strict liability)

Sources: Amazon.com Services LLC v. Perplexity AI Inc., N.D. Cal., preliminary injunction order, 9 March 2026 · CMA, Complying with consumer law when using AI agents, 9 March 2026 · DRCF, The Future of Agentic AI, 31 March 2026 · OpenAI ChatGPT Agent documentation · Anthropic computer use documentation · California AB 316 · Texas HB 149 (TRAIGA) · EU Regulation 2024/1689 · EU PLD 2024/2853.

§ 03 · The diff

What the marketing says vs what the terms do.

The Perplexity blog post of 4 November 2025 ("Bullying is not innovation") and Amazon's complaint filed the same day are not arguing about the same product. They are arguing about which leg of the consent chain controls. The diff below is the same pattern that ran underneath Camera Roll Cloud, applied to agents: a narrow promise to the user against a broad term that governs the rest of the system.

A The marketing layer
What the agent provider promises the user
  • "You remain in control." Confirmation prompts before purchases, transfers, or anything irreversible.
  • The agent acts on the user's behalf. The user has chosen this assistant; the user can revoke at any time.
  • The provider has its own usage policy. Manipulation, fraud, impersonation are prohibited; the user is told these rules exist.
  • Public statements frame the user as the principal. Perplexity's November 2025 blog post compared blocking Comet to a platform stopping users hiring assistants.
  • Anthropic and OpenAI both publish documented user-agent strings (Claude-User; ChatGPT Agent with RFC 9421 signatures).
B The legal layer
What the user agreement and the platform layer actually say
  • Liability cap: commonly the greater of fees paid in the last 12 months or $100 (OpenAI ToU). User indemnifies provider for third-party claims arising from use.
  • Amazon ToS, before any AI agent ever opened a browser, already required agents to identify themselves and limited automated access to public portions of the site.
  • The court found Comet was accessing Amazon "with the Amazon user's permission, but without authorization by Amazon" and granted a preliminary injunction under the CFAA.
  • CMA, 9 March 2026: a business is responsible for what its agent does in the same way it is responsible for an employee. Up to 10% of worldwide turnover under DMCCA.
  • California AB 316: "the AI acted autonomously" is no longer a usable civil-litigation defence in California.
§ Why the gap matters
The court did not say the user did anything wrong. It said the agent provider did. The legal exposure sits with the entity that wrote software to log into someone else's platform without the platform's authorization — even when a real user, at the keyboard, asked it to. That allocation of risk is what will reshape the agent product category over the next twelve months.
§ 04 · The numbers

The state of the operational layer, in figures.

Most of these numbers come from one of two places: the Amazon complaint and Judge Chesney's order, or peer-reviewed and industry data on how organisations are actually deploying agents. The pattern is consistent — deployment is moving faster than the controls.

Cease-and-desist warnings
5 at minimum

Warnings Amazon says it sent Perplexity between November 2024 and August 2025 before suing. The cease-and-desist trail was a material factor in the unauthorized-access finding.

Amazon complaint, N.D. Cal., 2025
Time to circumvent
24 hours

Time between Amazon shipping a technical block on Comet (August 2025) and Perplexity shipping a software update to bypass it. The persistence-after-block pattern shaped the irreparable-harm finding.

Amazon complaint & preliminary injunction order, 2026
DMCCA fine ceiling
10% global turnover

Maximum penalty under the UK Digital Markets, Competition and Consumers Act 2024 for breaches of consumer law via agents. The CMA can impose this directly, without court proceedings, since the DMCCA's commencement.

DMCCA 2024 · gov.uk, March 2026
Cannot terminate misbehaving agent
60% of orgs

Organisations surveyed in 2026 that lack a working kill switch for agents they have deployed. 63% cannot enforce purpose limitations; 55% cannot isolate AI from the broader network. 100% have agentic AI on the roadmap.

Kiteworks 2026 Data Security & Compliance Risk Forecast
Peer-reviewed · attack rejection
100%

Adversarial rejection rate across 600 attack attempts in Tallam's reference implementation of Invocation-Bound Capability Tokens — a proposed cryptographic floor for agent delegation. The protocol is not yet a standard.

Tallam, arXiv:2605.05440, 2026
Peer-reviewed · verification latency
0.049 ms

Median verification latency for IBCT delegation in the same reference implementation. Fast enough to inline at every tool call; even multiplied across every platform a single agent touches, the added overhead stays well below human reaction time.

Tallam, arXiv:2605.05440, 2026
Autonomous execution observed
79.5%

Share of agent actions that proceeded autonomously under Chen's six-check Boundary Engine prototype; 6.1% escalated to a human; 14.4% blocked. The "fail-and-report" pattern is becoming the dominant academic recommendation for human-in-the-loop agent governance.

Chen, AITH protocol, 2026 (machine-verified via Tamarin Prover)
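Chen's three-way split (proceed, escalate, block) can be sketched as a policy gate in which the strictest verdict wins. Everything here (`Verdict`, the two checks, the action dictionary) is a hypothetical illustration of the fail-and-report pattern, not the AITH protocol itself.

```python
from enum import Enum

class Verdict(Enum):
    PROCEED = "proceed"    # executes autonomously (Chen's 79.5%)
    ESCALATE = "escalate"  # routed to a human     (6.1%)
    BLOCK = "block"        # fail-and-report        (14.4%)

def check_scope(action):
    """Block anything aimed outside the targets the deployer allowed."""
    return (Verdict.BLOCK if action["target"] not in action["allowed_targets"]
            else Verdict.PROCEED)

def check_reversibility(action):
    """Escalate anything irreversible to a human checkpoint."""
    return Verdict.ESCALATE if action.get("irreversible") else Verdict.PROCEED

SEVERITY = {Verdict.PROCEED: 0, Verdict.ESCALATE: 1, Verdict.BLOCK: 2}

def gate(action, checks=(check_scope, check_reversibility)) -> Verdict:
    """Run every check; the strictest verdict controls."""
    return max((check(action) for check in checks), key=SEVERITY.__getitem__)

assert gate({"target": "cart", "allowed_targets": {"cart"}}) is Verdict.PROCEED
assert gate({"target": "cart", "allowed_targets": {"cart"},
             "irreversible": True}) is Verdict.ESCALATE
assert gate({"target": "payments", "allowed_targets": {"cart"}}) is Verdict.BLOCK
```

The design choice the academic literature keeps converging on is visible in the last line: a failed check does not degrade gracefully, it blocks and reports.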
§ 05 · Regional · The carve-out map

Every major jurisdiction has moved. In opposite directions.

In Camera Roll Cloud the loudest signal was a state-level carve-out: Illinois and Texas excluded under BIPA and CUBI. For agents the pattern is bigger and louder. The United States is regulating agents at the state level while the federal executive is actively trying to preempt those laws. The EU and UK are building parallel frameworks that bind on different bases. The map looks coherent only if you read it by jurisdiction — and even then, only barely.

§ United Kingdom

CMA guidance + DMCCA enforcement

Business deploying an agent is responsible for what the agent does, as if it were an employee. 10% of worldwide turnover ceiling. The CMA can enforce directly without court proceedings.

In force · 9 Mar 2026
§ United Kingdom

DRCF foresight paper · 4 regulators in unison

CMA, FCA, ICO and Ofcom co-sign a cross-regulatory paper defining a five-level autonomy spectrum and seven categories of compliance risk. Not policy. Read as policy.

Published · 31 Mar 2026
§ United Kingdom

ICO ADM consultation under DUAA 2025

The ICO's draft statutory Code of Practice on AI and automated decision-making, following the Data (Use and Access) Act 2025. Article 22 GDPR equivalent for agent-driven decisions.

Open · 31 Mar 2026
§ EU

AI Act Article 50 transparency

Providers of AI systems that interact with natural persons must disclose the interaction. The provision does not cover agent-to-platform interactions where no natural person is on the receiving end — that gap is documented in the academic literature.

Applies · 2 Aug 2026
§ EU

Product Liability Directive (revised)

Software, including AI, is explicitly captured as a "product" under strict liability. AI Act compliance becomes the de facto safety benchmark for defectiveness. Transposition deadline is the hardest date on the regulatory calendar this year.

Transposed by · 9 Dec 2026
§ EU

GDPR Art. 22 · automated decisions

The older spine of agent regulation in the EU. Solely automated decisions producing legal effects already require a lawful basis, the right to obtain human intervention, and the right to contest. Pre-existing law that survives every newer framework.

In force since 2018
§ California

AB 316 · the "AI did it" rule

Removes "the artificial intelligence acted autonomously" as a defence in civil litigation. The most quoted line on agent liability in 2026 is now also operative law in the state where most agent providers are incorporated.

Operative · 1 Jan 2026
§ California

CDAFA · joins CFAA in agent suits

The California Comprehensive Computer Data Access and Fraud Act was the second cause of action in Amazon v. Perplexity, alongside the federal CFAA. California has become the de facto venue for agent-trespass litigation.

Cited in Amazon v. Perplexity
§ Texas / Colorado

TRAIGA (TX) + Colorado AI Act

Texas HB 149: governance, transparency, manipulative-AI prohibitions. Colorado: annual impact assessments for high-risk AI. Two more state regimes that an agent product cannot ignore by 30 June 2026.

Live · 1 Jan and 30 Jun 2026
§ US Federal

CFAA · the platform's weapon

The 1986 federal anti-hacking statute is doing the legal work at the platform layer. hiQ v. LinkedIn (9th Cir., 2022) governs public-page scraping; Amazon v. Perplexity is about logged-in account access, a category the CFAA still reaches.

Live precedent
§ US Federal

Executive preemption (December 2025 EO)

The December 2025 executive order directs Commerce to identify conflicting state AI laws and stands up an "AI Litigation Task Force" tasked with challenging them. Preemption against the state-level patchwork is the political counter-pressure to AB 316, TRAIGA, the Colorado AI Act, and the rest.

Active · pending litigation
§ Watch list

Australia · Singapore · India

None directly addresses the user-agent-platform problem yet. Each has the surface area. India's DPDP Act, in force, is the most obvious gap in the global agent-regulation map — and the next likely addition to the Library on this site.

No agent-specific guidance
§ The carve-out as signal
The loudest line on the map is not any single law. It is the contradiction between California's state-level move (the "AI did it" rule, the CDAFA, AB 316) and the federal executive's effort to preempt state AI legislation. Whichever side of that contradiction wins in 2026 will decide whether agent providers face fifty different rules or one. The agent product roadmap should be hedging against both.
§ 06 · Peers

Three providers. Three philosophies.

The architectural choice is the policy choice. Where each agent provider places the identification — client-side, network-layer, or nowhere — determines who carries the legal exposure when the agent walks into someone else's house. The three cards below are not rankings. They are different theories of who is responsible.

Approach A · Client-side
Anthropic
Claude · computer use · Managed Agents
Identification
Claude-User user-agent string; documented, respects robots.txt. Usage Policy prohibits impersonating a human.
Execution
Client-side. Computer use runs in the developer's sandbox, not Anthropic's. Screenshots and actions are captured locally.
Where the risk sits
With the deployer. Developer obtains end-user consent, manages credentials, ensures platform compliance, owns the audit trail.
Strongest fit for enterprise compliance posture; safety choices visible to the deployer.
Documentation explicitly tells developers to inform end users of risks and obtain consent before enabling computer use.
Trade-off: weakest end-user-facing autonomy guarantee. The consumer doesn't see the policy layer; the deployer does.
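The robots-respecting behaviour described above is mechanically simple; Python's standard `urllib.robotparser` is enough to sketch it. The robots.txt content below is illustrative, not any platform's actual policy; only the `Claude-User` token comes from the documentation cited here.

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt a platform might serve: public catalogue open,
# logged-in account area closed to the documented Claude-User agent.
robots_txt = """\
User-agent: Claude-User
Disallow: /account/

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A robots-respecting agent checks before every fetch:
assert rp.can_fetch("Claude-User", "https://example.com/products/widget")
assert not rp.can_fetch("Claude-User", "https://example.com/account/orders")
```

Note what the sketch also shows: robots.txt is a request, not a contract. An agent that chooses not to call `can_fetch` loses nothing technically, which is why the document treats this layer as unilateral.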
Approach B · Cryptographically attested
OpenAI
ChatGPT Agent · prev. Operator
Identification
RFC 9421 HTTP Message Signatures on every outbound request. Signature-Agent: https://chatgpt.com. Public key directory at a well-known URL.
Execution
Server-side. The agent runs in OpenAI's environment; platforms can verify identity at the network edge.
Where the risk sits
Distributed. The network verifies identity; allowlisting partners (Akamai, Cloudflare, HUMAN AgenticTrust) treat it as a known bot.
Platforms can decide, at the edge, whether to allow the agent — without playing user-agent whack-a-mole.
Makes the platform's choice visible: the agent identifies itself; the platform either consents or doesn't.
Trade-off: identification doesn't itself confer authorization. Amazon could still refuse a signed agent if its ToS doesn't allow them.
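What edge verification of a signed agent request looks like can be sketched with the RFC's hmac-sha256 algorithm. Production deployments like OpenAI's use asymmetric keys published at a well-known URL, so treat this as the shape, not the wire format: the signature-base construction is simplified from RFC 9421, and the header values and shared secret are illustrative.

```python
import hashlib
import hmac

def signature_base(components: dict, params: str) -> bytes:
    """Simplified RFC 9421 signature base: one line per covered
    component, then the @signature-params line."""
    lines = [f'"{name}": {value}' for name, value in components.items()]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines).encode()

def verify(components, params, signature, key) -> bool:
    """Recompute the MAC over the signature base and compare."""
    expected = hmac.new(key, signature_base(components, params),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# A platform edge deciding whether a request really comes from the
# agent it claims to be (values illustrative).
key = b"shared-secret-for-demo"
covered = {
    "@method": "GET",
    "@authority": "example.com",
    "signature-agent": "https://chatgpt.com",  # declared agent identity
}
params = '("@method" "@authority" "signature-agent");alg="hmac-sha256"'
sig = hmac.new(key, signature_base(covered, params), hashlib.sha256).digest()

assert verify(covered, params, sig, key)   # genuine signed request passes
assert not verify({**covered, "signature-agent": "spoofed"}, params, sig, key)
```

Because the agent identity is inside the signed material, a spoofed `Signature-Agent` value fails verification, which is the whole point of the network-attested approach: the platform's yes-or-no happens at the edge, on cryptographic facts.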
Approach C · User-only
Perplexity
Comet · the case study
Identification
Spoofed Chrome user-agent. Court found Amazon was unable to distinguish Comet's activity from a human user.
Execution
Inside the user's browser, logged into the user's account. From the platform's logs, indistinguishable from the user.
Where the risk sits
With the agent provider. The court rejected, at preliminary injunction, the argument that user consent flows through to platform authorization.
Continued after five cease-and-desist letters between November 2024 and August 2025.
Issued a software update inside 24 hours of Amazon's technical block (August 2025) to circumvent it.
9th Circuit administratively lifted the injunction on 18 March 2026 pending merits review. The legal theory is contested but live.

Sources: Anthropic computer use & Claude-User documentation · OpenAI ChatGPT Agent allowlisting (HTTP Message Signatures, RFC 9421) · Amazon.com Services LLC v. Perplexity AI Inc., N.D. Cal., preliminary injunction order, 9 March 2026 · Perplexity blog "Bullying is not innovation," 4 November 2025.

§ 07 · The autonomy spectrum

Not all agents are the same shape.

The DRCF's foresight paper sets out a five-level autonomy spectrum for agentic systems. The legal exposure scales with where on the spectrum a product sits — and the user-agent-platform gap only emerges at the level where the agent is genuinely choosing what to do next.

L1 · Tool · reactive only · user clicks, system executes
L2 · Assistant · single-step recommendations · suggests; user approves each step
L3 · Workflow agent · multi-step, bounded · plans & executes within explicit guardrails
L4 · Goal-seeking agent · open-ended within domain · chooses tools and paths to reach a goal
L5 · Autonomous actor · self-directed, durable · sets sub-goals, persists across sessions, learns
increasing autonomy · increasing exposure under PLD, CMA & AB 316 →

Source: Digital Regulation Cooperation Forum (CMA · FCA · ICO · Ofcom), The Future of Agentic AI foresight paper, 31 March 2026. The spectrum is the DRCF's; the legal-exposure mapping along the bottom is this site's reading. The user-agent-platform gap addressed in §01 emerges meaningfully from Level 3 onwards — once the agent is choosing what to do next, the user can no longer be said to have authorized each action.

§ Peer-reviewed: why levels 4 and 5 are different
Cohen, Kolt, Bengio, Hadfield and Russell argued in Science (2024, vol. 384, "Regulating advanced artificial agents") that long-term planning agents — roughly DRCF Levels 4 and 5 — present governance problems that cannot be discharged by empirical testing alone, because sufficiently capable systems can recognise test environments and behave differently in deployment. Their regulatory recommendation is structural restraint at the compute layer. Whether or not one follows them all the way, the underlying observation matters: the legal categories built for Levels 1–3 break down at Levels 4–5.
§ 08 · Misconceptions

Three things the press got wrong.

Like Camera Roll Cloud, this story has a polite version that runs in trade press and a fuller version sitting underneath. These are the three framings that have been doing the most work to obscure the substance.

01
Amazon v. Perplexity is about scraping.
It isn't. Public-page scraping of unauthenticated content is governed by hiQ v. LinkedIn (9th Cir., 2022), which limits CFAA reach over public pages. Amazon v. Perplexity is about logged-in account access — a different legal category. Amazon's ToS, written before Comet existed, already required agents to identify themselves and limited automated access to public portions of the site. The case is about a contract platforms had with their users, not about the public web.
02
AI agents are too new for the law.
All four UK regulators — CMA, FCA, ICO, Ofcom — publicly stated the opposite on 31 March 2026: existing frameworks apply now. The same position is taken in the US (CFAA, state UDAP, AB 316) and in the EU (GDPR Art. 22, the AI Act, the revised PLD). What is new is the case law, not the legal exposure. The exposure is current; the precedents are catching up.
03
If the user consents, the agent can do anything the user could do.
Judge Chesney's order says no. Comet was operating "with the Amazon user's permission, but without authorization by Amazon" — and that was enough for a preliminary injunction under both federal and state computer-fraud law. Platform authorization is a separate legal event from user consent, and most platforms have not yet rewritten their ToS to address agents on the user's behalf. That rewrite is where the next round of changes will land.
§ 09 · Self-test

Where you sit decides what you owe.

The CMA's scenario is the business deploying an agent to its own customers (chatbots, refund handlers, marketing). The Amazon scenario is the consumer running an agent against someone else's platform (shopping, booking, account management). The legal exposure is structurally different. So is the self-test.

§ A · Deployer self-test
If you are deploying an agent in a consumer-facing role
  1. Have you trained and tested the agent against the consumer-rights framework in every market you serve — Consumer Rights Act 2015 and CPRs 2008 in the UK, the FTC Act in the US, the UCPD in the EU?
  2. Do you run A/B testing of agent-customer interactions against compliant outcomes, with the results reviewed by someone with appropriate experience?
  3. Is there a documented human-in-the-loop checkpoint for high-impact actions (refunds, cancellations, credit decisions, anything irreversible)?
  4. Can you terminate a misbehaving agent in production within minutes? (The Kiteworks data suggests 60% of organisations cannot.)
  5. Do you have a remediation plan ready to deploy at the scale at which the agent operates — and do you accept that prospective remediation does not cure prior breaches?
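Item 4 is the one most organisations fail, and the mechanism is not exotic. A minimal sketch of a kill switch (a revocation flag consulted before every side-effecting tool call) might look like this; `KillSwitch` and `tool_call` are hypothetical names, not any vendor's API.

```python
import threading

class KillSwitch:
    """Process-wide revocation flag, flippable out-of-band (e.g. from an
    ops console) and consulted before every side-effecting step."""
    def __init__(self):
        self._revoked = threading.Event()
        self.reason = ""

    def revoke(self, reason: str):
        self.reason = reason
        self._revoked.set()

    def checkpoint(self):
        if self._revoked.is_set():
            raise RuntimeError(f"agent terminated: {self.reason}")

def tool_call(action: str, switch: KillSwitch) -> str:
    switch.checkpoint()          # gate every tool call on the flag
    return f"executed {action}"

switch = KillSwitch()
assert tool_call("orders:read", switch) == "executed orders:read"

switch.revoke("operator kill")   # takes effect at the very next call
try:
    tool_call("orders:read", switch)
    raise AssertionError("should have been blocked")
except RuntimeError:
    pass
```

The hard part in production is not the flag; it is making sure every execution path actually passes through a checkpoint, which is why the Kiteworks figure is as high as it is.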
§ B · Consumer / builder self-test
If you are using or building a personal agent against third-party platforms
  1. Have you read the ToS of the platform your agent will hit? Does it explicitly prohibit or restrict automation in logged-in areas?
  2. Does your agent identify itself when it shows up — a stable user-agent string, an RFC 9421 signature, or both?
  3. Has the platform ever sent your agent provider a cease-and-desist? (Persistence after warning is the factor that turned Amazon's complaint into a preliminary injunction.)
  4. If your account is suspended, the wrong order is placed, or your data is exposed — which of the three legal layers carries the loss under the agreements you have actually signed?
  5. For builders: where does your liability sit between the user's $100 cap, your platform terms with the model provider, and the platforms your users will direct the agent toward?
§ Research

What sits underneath this field note.

Five categories of source: peer-reviewed scholarship (top), the case law, the regulator publications, the primary law, and the news. The peer-reviewed layer is what differentiates this piece — most agent commentary cites only practitioner pieces and trade press. The legal-academic and computer-science literatures have already converged on the core diagnosis: the user-to-agent step works, the agent-to-platform step doesn't.

⬢ Peer-reviewed scholarship
Science · Peer-reviewed · 2024

Regulating advanced artificial agents

Cohen, Kolt, Bengio, Hadfield & Russell argue that governance frameworks must address AI systems that cannot be safely tested in deployment. Long-term planning agents — DRCF Levels 4 and 5 — can recognise test environments. The paper's regulatory proposal is compute-level restraint. Co-authored by a Turing Award laureate (Bengio), the standard-textbook author on AI (Russell), and Anthropic's senior advisor on governance (Hadfield). The most authoritative academic peg for this entire field note.

Science 384(6691):36–38 · DOI 10.1126/science.adl0625
Read →
Notre Dame Law Review · Forthcoming

Governing AI Agents

Kolt's paper, forthcoming in Notre Dame Law Review Vol. 101, brings two analytic frameworks to agent governance: the economic theory of principal-agent problems and the common-law doctrine of agency relationships. It is the seminal legal-academic statement of the problem this field note deconstructs: when one party (the principal) relies on another (the agent) to act on their behalf, information asymmetry and authority structures become the central question. Almost every later legal paper on agents cites this one.

Notre Dame L. Rev. 101 · SSRN 4772956 · Feb 2025
Read →
arXiv · Computer science · 2026

Authorization Propagation in Multi-Agent AI Systems: Identity Governance as Infrastructure

Tallam (2026) formalises authorization propagation as a workflow-level property and identifies three sub-problems — transitive delegation, aggregation inference, and temporal validity — plus seven structural requirements for authorization architectures. The reference implementation of Invocation-Bound Capability Tokens hits 0.049 ms verification latency and 100% adversarial rejection across 600 attack attempts. The technical floor that the policy debate doesn't yet have.

arXiv:2605.05440 · 2026
Read →
arXiv · Multi-author · 2025

Authenticated Delegation and Authorized AI Agents

South et al. define authenticated delegation as the verification that (a) an interacting entity is in fact an AI agent, (b) acting on behalf of a specific human user, and (c) granted the necessary permissions for specific actions. Proposes a method for expressing flexible, natural-language permissions for agents and transforming them into auditable, fine-grained access-control rules. The companion technical paper to Kolt's legal frame.

arXiv:2501.09674 · MIT et al. · 2025
Read →
arXiv · Security survey · 2026

From Secure Agentic AI to Secure Agentic Web: Challenges, Threats and Future Directions

A 2026 survey paper that names the specific phenomenon this field note dissects: service-level delegation blurs authorization boundaries, and "trust-authorization mismatch" arises when agents over-trust peers or services and perform actions beyond their intended scope. Multi-agent configurations introduce new pathways for prompt-infection-style propagation and attacks targeting communication channels. The clearest taxonomy yet of the agent-web threat surface.

arXiv:2603.01564 · 2026
Read →
arXiv · Framework · 2025

AGENTSAFE: A Unified Framework for Ethical Assurance and Governance in Agentic AI

Proposes a framework that profiles agentic plan→act→observe→reflect loops and maps risks onto structured taxonomies extended with agent-specific vulnerabilities. Continuous-governance primitives: semantic telemetry, dynamic authorization, anomaly detection, interruptibility. Provenance and accountability reinforced through cryptographic tracing. Relevant to every CMA-guidance question about human oversight and remediation at scale.

arXiv:2512.03180 · December 2025
Read →
ACM CHI · Peer-reviewed · 2019

Should I Agree? Delegating Consent Decisions Beyond the Individual

Nissen, Neumann, Mikusz, Gianni, Clinch, Speed & Davies (CHI '19). The pre-LLM paper that asked the question this field note asks: what happens to informed consent when individuals delegate consent decisions to systems acting on their behalf? Empirically grounded; concludes that the standard consent model breaks at the point of delegation. The closest thing in the HCI literature to a foundational reference for the user-to-agent leg of the stack.

CHI 2019 · DOI 10.1145/3290605
Read →
Comp. Law & Security Review · 2026

Transparency in human-AI interaction — an analysis of Article 50(1) AI Act

Doctrinal and functional analysis of Article 50(1)'s scope and gaps. Identifies the central limitation that this field note relies on: the provision excludes interactions between AI systems with no human intermediation. The paper argues that without further interpretative guidance, Article 50 will remain "a formalistic gesture rather than a substantive guarantee." Crucial for understanding why the EU's 2 August 2026 transparency deadline doesn't close the agent-to-platform gap.

ScienceDirect · April 2026
Read →
⬡ Case law
N.D. Cal. · Preliminary injunction

Amazon.com Services LLC v. Perplexity AI, Inc.

Judge Maxine M. Chesney's preliminary injunction order, 9 March 2026, in the U.S. District Court for the Northern District of California. The first major US case to test whether the Computer Fraud and Abuse Act applies to AI agents acting at user direction on third-party platforms. The court's finding that Comet accessed Amazon "with the Amazon user's permission, but without authorization by Amazon" is the legal pivot that this entire field note rests on.

N.D. Cal. · 9 Mar 2026 · CFAA & CDAFA
Read →
9th Cir. · Administrative stay

Ninth Circuit lifts Perplexity injunction pending merits review

On 18 March 2026, Circuit Judges Eric Miller and Patrick Bumatay administratively lifted Judge Chesney's preliminary injunction, with the order in force only until the appellate court rules on the merits. Perplexity's argument: under the CFAA the only "access" was by users of the Comet browser, not by Perplexity. The case is genuinely unsettled — the district court's framing is not yet binding precedent.

9th Circuit · 18 Mar 2026
Read →
⬣ Regulator publications
CMA · Guidance

Complying with consumer law when using AI agents

The first guidance from a major consumer-protection authority anywhere in the world specifically addressing AI agents in consumer-facing roles. Four operational principles: transparency, compliance by design, human oversight, swift remediation. The business deploying the agent is responsible for what it does, including where a third party designed or supplied it. Enforcement via the DMCCA: up to 10% of worldwide turnover, imposed by the CMA without court proceedings.

CMA · 9 March 2026
Read →
DRCF · Foresight paper

The Future of Agentic AI

Cross-regulatory paper from the Digital Regulation Cooperation Forum — CMA, FCA, ICO and Ofcom together. Defines the five-level autonomy spectrum used in §07 of this page, catalogues seven categories of compliance risk, and distinguishes "amplified" from "novel" risks. Carries a polite disclaimer that it should not be read as policy. Should be read as policy.

DRCF · 31 March 2026
Read →
European Commission

AI Act Article 50 · Transparency obligations

Transparency obligations for AI systems that interact with natural persons, generate synthetic content, perform emotion recognition, or produce deep fakes. Applicable from 2 August 2026. The Commission consultation on implementation guidelines closes 3 June 2026. The core gap for this field note: Article 50 obliges providers when there is a natural person on the other side — not when the agent is talking to another platform.

EU Regulation 2024/1689 · Art. 50
Read →
OpenAI · Allowlisting documentation

ChatGPT Agent allowlisting · HTTP Message Signatures

The technical reference for OpenAI's network-layer agent identification. Every outbound request from ChatGPT Agent is signed under RFC 9421 with a Signature-Agent header set to https://chatgpt.com, with public keys discoverable at a well-known URL. Recognised by Akamai, Cloudflare and HUMAN AgenticTrust as a verified bot. The cleanest technical primary source on the network-attested approach in §06.

OpenAI Help Center · 2025–2026
Read →
Anthropic · Computer use documentation

Claude computer use tool · client-side execution

Anthropic's developer documentation for computer use. The framing is consistent with §06's Approach A: computer use is a client-side tool; screenshots, mouse actions, keyboard inputs and files are captured and stored in the developer's environment, not Anthropic's. Developers must inform end users of risks and obtain consent before enabling the feature. The policy choice is structural, not just textual.

Anthropic platform · 2026
Read →
⬨ Primary law & statutes
California · Statute

California AB 316 · "AI did it" not a defence

Operative 1 January 2026. Bars "the artificial intelligence acted autonomously" as a defence in civil litigation arising from harm caused by AI. Most directly relevant to agent products incorporated in California, but as a rule of California civil litigation it is available to any plaintiff bringing suit in California courts against a defendant whose AI did harm.

Cal. Civ. Code · AB 316 · 2026
Read →
EU · Directive

EU Product Liability Directive (revised)

Directive (EU) 2024/2853. Software, explicitly including AI, is captured as a "product" under strict liability. Member-state transposition deadline 9 December 2026. AI Act compliance becomes the de facto safety benchmark when courts assess defectiveness. The hardest date on the EU regulatory calendar for any business shipping AI agents into European markets this year.

EU PLD 2024/2853
Read →
⬩ News & industry analysis
GeekWire · Reporting

Judge blocks Perplexity's AI bot from shopping on Amazon

Todd Bishop's reporting on the 9 March 2026 ruling. The cleanest single-piece summary of the dispute's procedural history, including Amazon's claim of at least five cease-and-desist warnings starting November 2024 and Perplexity's pattern of shipping a software update within 24 hours of Amazon's August 2025 technical block.

GeekWire · 10 March 2026
Read →
Search Engine Journal

Amazon wins preliminary injunction against Perplexity's Comet

Matt G. Southern's reading of what the order changes for platforms and SEO. The most useful framing of why the court "treated user consent and platform authorization as two separate requirements" — and why that wording is the line that matters when platforms write the next version of their terms.

SEJ · 12 March 2026
Read →
Cooley · Legal analysis

AI Agents and Consumer Law: What Businesses Need to Know

Cooley's structured walk-through of the CMA guidance's four principles — transparency, compliance by design, human oversight, swift remediation — and what each implies in practice. Includes the operational point most internal compliance teams miss: forward-looking remediation is not a cure for, or a defence against, previous breaches.

Cooley · 26 March 2026
Read →
Kiteworks · Analysis

When Four UK Regulators Speak in Unison, Pay Attention

The clearest unpacking of the DRCF foresight paper as it lands on operational compliance teams. Source for the §04 figures: 63% of organisations cannot enforce purpose limitations on agents; 60% cannot terminate misbehaving agents; 55% cannot isolate them from broader networks. 100% have agentic AI on the roadmap. The gap between deployment ambition and operational reality.

Kiteworks · April 2026
Read →
§ Related field note
The Trust Gap · survey in progress
A standing survey on what users believe their platforms are doing vs what platforms are actually doing. The agent question is now in the next wave: have you let an AI act on your behalf, and did the platform on the other side ask for confirmation?
Open the survey →