Attack Surface · Wired · Feb 21, 2026

OpenAI expands ChatGPT memory to reference all past conversations by default

What it breaks: Cross-session memory persistence creates a retrievable store of sensitive context that no input validation or session boundary control was designed to protect. Every future interaction can surface content from any prior session, with no write-policy controls employees or admins can inspect. Watch for: Memory is opt-out, not opt-in. Verify your ChatGPT Enterprise tenant configuration and confirm whether persistent memory scope is addressed in your acceptable use policy before the next audit cycle.

Attack Surface · BleepingComputer · Feb 14, 2026

Researchers demonstrate prompt injection via MCP tool descriptions that persist across agent sessions

What it breaks: MCP tool descriptions are treated as trusted configuration by the agent, not as untrusted input. An attacker who controls a connected tool can embed instructions that replay into every session that loads that tool, with no input validation layer between description and execution. Watch for: Audit which MCP servers your agents have loaded and whether tool descriptions are reviewed before onboarding. Tool metadata is now an attack surface, not just functionality documentation.
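One mitigation the "Watch for" guidance implies is pinning reviewed tool descriptions so a changed description cannot silently reach the agent. A minimal sketch (not the MCP SDK's actual API — the tool dicts and function names here are illustrative):

```python
import hashlib
import json

def description_fingerprint(tool: dict) -> str:
    """Stable hash over the fields an attacker who controls a tool
    server would modify to inject instructions."""
    canonical = json.dumps(
        {"name": tool["name"], "description": tool["description"]},
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode()).hexdigest()

def check_tools(loaded_tools: list[dict], approved: dict[str, str]) -> list[str]:
    """Return names of tools whose descriptions changed since review,
    or that were never reviewed at all."""
    flagged = []
    for tool in loaded_tools:
        if approved.get(tool["name"]) != description_fingerprint(tool):
            flagged.append(tool["name"])
    return flagged
```

At session start, refuse to load any flagged tool until its description has been re-reviewed; the point is that description changes become a gated event rather than an invisible one.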

Attack Surface · Dark Reading · Feb 10, 2026

Enterprise survey: 68% of employees use AI tools not approved by their security team

What it breaks: Shadow AI creates agent deployments with no identity, no audit trail, and no access controls reviewed by security. Each unapproved tool is a non-human identity with potentially broad access operating entirely outside your governance model. Watch for: Your NHI inventory almost certainly undercounts active agents. Run a discovery pass against OAuth grants and API key issuance logs before assuming your agent surface is mapped.
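The discovery pass described above reduces to diffing observed OAuth clients against the formal inventory. A minimal sketch, assuming a list of grant records exported from your IdP (the `client_id` field name is illustrative — Okta, Entra, and Google Workspace exports each use their own schema):

```python
def discover_unknown_clients(grants: list[dict], inventory: set[str]) -> dict[str, int]:
    """Count active OAuth grants per client_id that is absent from the
    formal non-human-identity inventory."""
    unknown: dict[str, int] = {}
    for g in grants:
        cid = g["client_id"]
        if cid not in inventory:
            unknown[cid] = unknown.get(cid, 0) + 1
    return unknown
```

Anything this returns is an agent or integration operating outside your governance model; repeat the same pass over API key issuance logs.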

Attack Surface · The Register · Jan 29, 2026

Researchers show Gemini for Workspace can exfiltrate email content via indirect prompt injection in calendar invites

What it breaks: Calendar invites are user-generated content treated as context, not commands. When an agent reads a malicious invite and acts on embedded instructions, the attack surface extends to every external input the agent processes, not just what users directly submit. Watch for: If your Workspace AI rollout hasn't scoped which data sources the agent can read and act on, indirect injection via calendar, email, and Drive is an unreviewed attack vector in your current deployment.

Attack Surface · SecurityWeek · Jan 22, 2026

Malicious NPM packages found embedding LLM prompt instructions in documentation strings read by coding agents

What it breaks: Coding agents that read documentation to understand package usage are now a supply chain attack vector. Instructions embedded in README files or docstrings are processed the same way as developer-authored context, with no distinction between documentation and command. Watch for: Review whether your coding agent deployment scans packages before processing their documentation. Supply chain trust models built for human developers do not extend to agents that execute instructions embedded in that content.
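A pre-ingestion gate for package documentation can be as simple as a heuristic scan for imperative instruction patterns before the text ever reaches the agent's context. A sketch under that assumption — the pattern list is illustrative, not exhaustive, and a determined attacker can phrase around any fixed list:

```python
import re

# Heuristic phrases that signal instructions aimed at an LLM rather
# than documentation for a human reader. Extend per your threat model.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are (now )?an?\b",
    r"do not (tell|reveal|mention)",
    r"system prompt",
]

def flag_doc_text(text: str) -> list[str]:
    """Return the patterns matched in README/docstring text; a
    non-empty result means quarantine the package docs for review."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]
```

Treat a match as a signal for human review, not a verdict — the real control is that documentation from untrusted packages gets inspected at all before an agent processes it.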

Trust Model · Reuters · Feb 26, 2026

FBI formally attributes Bybit $1.5B hack to North Korea's Lazarus Group

What it breaks: The attribution confirms the attacker operated inside trusted signing infrastructure for weeks before execution. Cold storage and multi-sig assume honest operators. They do not account for a persistent adversary who has already compromised the signing environment above the hardware layer. Watch for: If your custody threat model ends at the hardware wallet, review the full signing ceremony including software, devices, and human processes that interact with keys before they reach hardware.

Trust Model · Financial Times · Feb 12, 2026

UK FCA opens review into accountability frameworks for AI agents executing financial decisions without per-action human sign-off

What it breaks: Existing accountability models assume a human is accountable for each decision. AI agents executing chains of actions under a single authorization break the one-decision-one-accountable-person structure that audit and regulatory frameworks were built around. Watch for: Map whether your current accountability documentation covers probabilistic multi-step agent decisions or only the initial authorization event that launched the agent.

Audit Gap · SecurityWeek · Feb 24, 2026

ServiceNow Virtual Agent found executing privileged actions without per-action identity verification after passing enterprise security review

What it breaks: Security reviews validated vendor security posture and API permission scoping. They did not check whether each individual agent action carried verified identity at execution time. Deployment-time authentication is not authorization at execution. Watch for: Add one question to your SaaS AI agent review checklist: does the agent verify identity at each action, or only at session initiation? Most current reviews only check the latter.
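The distinction between session-start authentication and per-action verification can be made concrete as a guard on every privileged call. A minimal sketch, assuming a hypothetical `verify_token` stub standing in for your real IdP or token-introspection call:

```python
import functools

class IdentityError(Exception):
    pass

def verify_token(token: str) -> bool:
    """Stub: in production, call your IdP / token introspection
    endpoint here. This placeholder only rejects empty or
    obviously-expired tokens."""
    return bool(token) and not token.startswith("expired:")

def per_action_identity(fn):
    """Require a fresh, verified identity token on every call —
    session-start authentication alone does not satisfy this check."""
    @functools.wraps(fn)
    def wrapper(*args, token: str = "", **kwargs):
        if not verify_token(token):
            raise IdentityError(f"identity not verified for action {fn.__name__!r}")
        return fn(*args, **kwargs)
    return wrapper

@per_action_identity
def reset_password(user_id: str) -> str:
    # Privileged action: only reachable with a per-call verified token.
    return f"password reset for {user_id}"
```

The review-checklist question from the item above maps directly onto this structure: does the vendor's agent have an equivalent of the decorator, or only the session-start check?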

Audit Gap · Dark Reading · Feb 17, 2026

Audit firm survey finds enterprises have on average 17x more non-human identities than formally inventoried

What it breaks: Access reviews and privilege audits are built on the assumption that the identity inventory is complete. A 17x gap means the majority of active credentials and agent identities are operating entirely outside your governance and review cycle. Watch for: Before scoping your next access review, run a discovery pass against OAuth grants, API keys, and service account issuance to close the gap between what you think exists and what is actually active.
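Closing that gap starts with computing it. A sketch of the set arithmetic, assuming you have already collected the active identities (from OAuth grants, API keys, service accounts) and the formally inventoried ones as sets of identifiers:

```python
def inventory_gap(active: set[str], inventoried: set[str]) -> tuple[set[str], float]:
    """Return the identities observed active but missing from the
    formal inventory, plus the active-to-inventoried ratio the
    survey describes."""
    missing = active - inventoried
    ratio = len(active) / len(inventoried) if inventoried else float("inf")
    return missing, ratio
```

The `missing` set is the concrete scoping input for the next access review; the ratio is the number to put in front of leadership.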

Audit Gap · The Register · Feb 8, 2026

Financial services firms report AI agent decisions cannot be reconstructed for regulatory audit due to absent context logging

What it breaks: Traditional audit trails log what action was taken and by whom. For AI agents, the audit-relevant question is what context, memory, and retrieved information drove the decision. Most agent deployments log output but not the input context that produced it. Watch for: Verify that your logging captures the full context window at decision time, not only the final action. Output logs without context reconstruction will not satisfy regulatory exam requirements.
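What "capture the full context window at decision time" means in practice is that the audit record stores the inputs alongside the action, with a content hash for tamper evidence. A minimal sketch — the record shape and field names are illustrative, not a regulatory standard:

```python
import hashlib
import json
import time

def log_decision(log: list, action: str, context_window: str,
                 retrieved: list[str], memory: dict) -> str:
    """Append an audit record capturing the inputs that drove the
    decision, not just the output. Returns the record's content
    hash, usable as a tamper-evidence reference."""
    record = {
        "ts": time.time(),
        "action": action,
        "context_window": context_window,   # full prompt at decision time
        "retrieved": retrieved,             # RAG / tool results used
        "memory": memory,                   # persistent memory snapshot
    }
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    record["hash"] = digest
    log.append(record)
    return digest
```

With records shaped like this, reconstructing "why did the agent do X" is a lookup; with output-only logs, it is unanswerable.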

Audit Gap · BleepingComputer · Jan 15, 2026

Amazon Q Business found executing natural-language instructions embedded in config files, bypassing code-layer security scanning entirely

What it breaks: Code scanning tools look for malicious payloads in executable content. Natural-language instructions in configuration files are not executable code and pass every scanner. The attack surface shifted from code to human-readable text that AI agents interpret as commands. Watch for: If your AI-assisted development security relies on code scanning as the primary defense, add a review step for natural-language content in configuration, documentation, and comment fields that agent tooling reads during execution.
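The added review step can be partially automated: walk parsed config structures and flag string values that read like imperative natural language. A heuristic sketch (the regex is illustrative and will produce both misses and false positives — it supplements human review, it does not replace it):

```python
import re

IMPERATIVE = re.compile(
    r"\b(ignore|execute|run|delete|send|fetch|disregard)\b"
    r".*\b(instructions?|command|file|data)\b",
    re.IGNORECASE,
)

def scan_config(node, path: str = "") -> list[str]:
    """Recursively flag paths to config values containing imperative
    natural-language phrasing an agent might act on."""
    hits = []
    if isinstance(node, dict):
        for k, v in node.items():
            hits += scan_config(v, f"{path}.{k}" if path else k)
    elif isinstance(node, list):
        for i, v in enumerate(node):
            hits += scan_config(v, f"{path}[{i}]")
    elif isinstance(node, str) and IMPERATIVE.search(node):
        hits.append(path)
    return hits
```

Run this over anything your agent tooling reads during execution — parsed YAML, JSON, TOML — since, as the item notes, these values pass every code scanner by design.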

Control Failure · SecurityWeek · Feb 5, 2026

Researchers demonstrate privilege escalation in GitHub Actions AI agent allowing production repo write access via chained tool calls

What it breaks: Individual tool permissions are reviewed in isolation. When agents chain multiple tool calls, the cumulative effective permission can exceed what any single call would allow. Least-privilege scoping at the tool level does not prevent escalation through sequences of permitted actions. Watch for: Review whether your agent deployment has controls that evaluate the cumulative effect of chained actions, not only the permission of each individual call. Sequence-level authorization is a different control from per-call scoping.
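Sequence-level authorization can be sketched as a guard that accumulates the chain's effective permissions and denies any call that would complete a forbidden combination. The policy and permission names below are hypothetical, chosen to mirror the GitHub Actions scenario:

```python
# Hypothetical policy: each permission is individually allowed, but
# combining repo write with workflow modification in one chain is not.
FORBIDDEN_COMBOS = [
    {"repo:write", "workflow:modify"},
]

class SequenceDenied(Exception):
    pass

class ChainGuard:
    """Track the cumulative effective permissions of a tool-call
    chain and deny calls that push the set across a forbidden
    combination — per-call scoping alone cannot catch this."""
    def __init__(self):
        self.granted: set[str] = set()

    def authorize(self, call: str, perms: set[str]) -> None:
        prospective = self.granted | perms
        for combo in FORBIDDEN_COMBOS:
            if combo <= prospective:
                raise SequenceDenied(
                    f"call {call!r} would combine {sorted(combo)} in one chain"
                )
        self.granted = prospective
```

Note that the guard evaluates the *prospective* union before granting, so the chain is stopped at the call that would create the escalation, not after it.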
