Who Holds the Keys to Your AI's Memory? Three Custody Models for the Agentic Era
As AI agents gain access to enterprise data, the question of who controls encryption becomes a strategic decision, not just a security one.
Jay Arora
March 2026
The short answer
AI data custody determines who can decrypt your agent's memory, conversation history, and extracted insights. Three models exist: self-custody (you hold the keys, maximum sovereignty), assisted end-to-end encryption (the vendor stores encrypted data but cannot read it), and managed custody (the vendor can decrypt under strict controls). The right choice depends on your threat model, recovery requirements, and regulatory obligations.
The data access problem nobody planned for
The Thales 2026 Data Threat Report, conducted by S&P Global's 451 Research, surfaced a striking number: only 34% of organizations know where all their data resides. That means two-thirds of enterprises have lost track of at least part of their own data — right as they're granting AI agents broad access to it.
Sébastien Cano, SVP of cybersecurity at Thales, framed the implication directly: "Insider risk is no longer just about people. It is also about automated systems that have been trusted too quickly." The report found that enterprises are embedding AI into daily workflows while granting these systems access that frequently comes with fewer security controls than those applied to human employees.
This is the custody problem. It's not just about encrypting data at rest or in transit — those are table stakes. The question is: when an AI agent processes your data, who holds the decryption keys? Who can read the agent's memory? Who can access the extracted decisions, the conversation history, the institutional knowledge that the agent has accumulated? And when something goes wrong — or a legal request arrives — who decides what gets disclosed?
Why custody matters more for agents than for humans
When a human employee handles sensitive data, they operate within organizational controls: access reviews, need-to-know policies, HR oversight. They process information in their heads, and what they 'remember' is limited by human cognition and isn't trivially extractable.
AI agents are different in three ways that make custody critical. First, agents accumulate — their memory grows over time, potentially containing months or years of decision history, customer data, strategic discussions, and proprietary analysis. Second, agents are extractable — unlike human memory, agent memory can be queried, exported, and exfiltrated in seconds. Third, agents are compellable — when a legal request arrives, a vendor that holds decryption keys may be legally obligated to produce agent memory contents. Under managed custody, your agent's institutional knowledge may be subject to disclosure in ways human memory never was.
Three models, three threat profiles
The custody landscape for AI data is settling into three models, each with distinct properties for recovery, disclosure, and operational complexity.
Self-custody: maximum sovereignty, maximum responsibility
In a self-custody model, the user generates and holds encryption keys locally. The service provider stores only ciphertext and cannot decrypt content under any circumstances — not for support, not for legal compliance, not for account recovery. This is the Signal model applied to AI agent memory.
The tradeoff is absolute: if you lose your keys and your recovery materials, your data is permanently gone. No vendor support ticket can fix it. But the security property is equally absolute: no subpoena, no breach of the vendor's systems, and no rogue employee can expose your agent's memory. For organizations handling defense, intelligence, or highly regulated data, self-custody may be the only acceptable model.
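In code, the whole model fits in a few lines: the key is generated client-side and never transmitted, so the vendor only ever sees ciphertext. Here is a minimal sketch using Python's cryptography package, assuming an illustrative memory format; nothing here reflects any specific vendor's API.

```python
# Self-custody sketch: the key is generated and kept locally.
# Only the nonce-prefixed ciphertext blob ever reaches the vendor.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)       # never leaves this machine

def encrypt_memory(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt agent memory client-side; the vendor stores the result."""
    nonce = os.urandom(12)                      # unique per encryption
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_memory(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

memory = b'{"decisions": [], "history": []}'
blob = encrypt_memory(memory, key)              # safe to hand to the vendor
assert decrypt_memory(blob, key) == memory
# Lose `key` and its backups, and `blob` is unrecoverable -- by design.
```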
Assisted E2EE: the pragmatic middle ground
Assisted end-to-end encryption stores encrypted data on the vendor's infrastructure, but the vendor holds only wrapped keys — they cannot decrypt your content without your participation. Recovery is possible through pre-configured mechanisms: recovery codes, recovery keys, or trusted contacts. This is the model Apple uses for Advanced Data Protection.
The key property here is that the vendor can facilitate recovery but cannot unilaterally access your data. If a legal request arrives, the vendor can produce only service metadata (account creation dates, login timestamps, IP addresses) and redacted receipts — not your agent's memory or decision history. For most organizations, this balances sovereignty with operational reality.
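The wrapped-key idea can be sketched just as briefly: a content key encrypts the data, and the content key itself is encrypted (wrapped) under a key derived from a secret only the user holds, such as a recovery code. This is an illustrative simplification, not Apple's actual Advanced Data Protection design.

```python
# Assisted-E2EE sketch: the vendor stores ciphertext plus a *wrapped*
# content key. Without the user's recovery code, neither is readable.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

content_key = AESGCM.generate_key(bit_length=256)  # encrypts agent memory

# The user keeps this offline; the vendor never sees it.
recovery_code = os.urandom(16).hex()
salt = os.urandom(16)

def derive_wrapping_key(code: str, salt: bytes) -> bytes:
    # Scrypt instances are single-use, so build a fresh one per derivation.
    return Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(code.encode())

# Wrap the content key. The vendor stores wrapped_key and salt only.
nonce = os.urandom(12)
wrapping_key = derive_wrapping_key(recovery_code, salt)
wrapped_key = nonce + AESGCM(wrapping_key).encrypt(nonce, content_key, None)

# Recovery: the user supplies the code, the vendor supplies wrapped_key
# and salt, and the content key is unwrapped client-side.
unwrap_key = derive_wrapping_key(recovery_code, salt)
recovered = AESGCM(unwrap_key).decrypt(wrapped_key[:12], wrapped_key[12:], None)
assert recovered == content_key
```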
Managed custody: maximum recoverability, minimum sovereignty
Under managed custody, the service provider can decrypt content under strict internal controls. This is how most SaaS products work today. Standard account recovery works. Administrators can restore access. Enterprise compliance workflows — retention holds, eDiscovery, audit exports — function as expected.
The tradeoff: under valid legal compulsion, the vendor may be obligated to produce plaintext content. Your agent's accumulated knowledge — every decision it extracted, every strategic discussion it recorded — may be disclosable. For organizations in industries with frequent litigation or regulatory inquiry, this is a material risk that most custody evaluations underweight.
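The structural difference is easy to see in code: under managed custody, the decryption key exists on the vendor's side, and only procedure stands between it and your plaintext. The toy sketch below makes the point; the policy reasons and function names are assumptions for illustration.

```python
# Managed-custody sketch: the vendor holds the key and gates decryption
# behind policy checks and an audit log. Illustrative, not a real API.
from datetime import datetime, timezone
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

VENDOR_KEY = AESGCM.generate_key(bit_length=256)   # lives on the vendor's side
AUDIT_LOG = []
ALLOWED_REASONS = {"support", "account_recovery", "legal_hold"}

def vendor_decrypt(blob: bytes, reason: str, approver: str) -> bytes:
    """Policy-gated decryption. The gate is procedure, not cryptography:
    a policy change or a valid legal order can always open it."""
    if reason not in ALLOWED_REASONS:
        raise PermissionError(f"reason {reason!r} not allowed by policy")
    AUDIT_LOG.append((datetime.now(timezone.utc), reason, approver))
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(VENDOR_KEY).decrypt(nonce, ciphertext, None)
```

Nothing cryptographic stops the legal_hold branch from running; that is exactly the disclosure risk described above.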
The regulatory forcing function
The DHS AI Strategy for 2025-2026 now explicitly requires encryption key custody to stay within jurisdiction for federal data. The EU AI Act, effective since mid-2025, requires transparency and accountability when AI processes personal data. Several US states — Texas, California, Illinois, Colorado — are enforcing AI statutes in 2026 that mandate disclosures about training data sources and algorithmic logic.
These regulations share a common thread: they assume someone can explain what the AI did with the data, and someone is accountable for that explanation. Custody models determine who that 'someone' is. Under self-custody, only you can provide the explanation. Under managed custody, the vendor may be compelled to provide it for you — and the explanation may not align with what you'd choose to disclose.
What to ask your agent infrastructure vendor
The custody conversation with any AI agent vendor should start with five questions. Who generates the encryption keys, and where are they stored? Can the vendor decrypt customer data, and under what circumstances? What happens to agent memory during account recovery — is there a cooling-off period with restricted actions? What exactly gets disclosed under a legal request in each custody tier? And finally: can you export your agent's memory in a portable, encrypted format that works without the vendor's systems?
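On that last question, "portable" has a testable meaning: the export file carries everything needed to decrypt it offline (ciphertext, KDF parameters, cipher parameters), with the key derived from a secret only you hold. Here is a minimal sketch of such an envelope; the field layout is an assumption, not a standard format.

```python
# Portable-export sketch: a self-describing encrypted envelope that can
# be decrypted with standard tools, no vendor systems required.
import base64
import json
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def export_memory(memory: bytes, passphrase: str) -> str:
    salt, nonce = os.urandom(16), os.urandom(12)
    key = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(passphrase.encode())
    ciphertext = AESGCM(key).encrypt(nonce, memory, None)
    envelope = {  # every parameter needed for offline decryption
        "version": 1,
        "kdf": {"name": "scrypt", "n": 2**14, "r": 8, "p": 1,
                "salt": base64.b64encode(salt).decode()},
        "cipher": {"name": "aes-256-gcm",
                   "nonce": base64.b64encode(nonce).decode()},
        "ciphertext": base64.b64encode(ciphertext).decode(),
    }
    return json.dumps(envelope, indent=2)
```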
If the vendor can't answer these questions clearly, or if the answer to 'can you decrypt customer data' is 'yes, but we have policies' — that's managed custody with extra steps. Policies are not encryption. And in the agentic era, the difference between 'we choose not to read your data' and 'we cannot read your data' is the difference between a promise and a mathematical guarantee.