Anthropic's Claude Code: Extensive Control Over Users' Devices

Technology News

Anthropic | Claude Code | AI Security

An analysis of Anthropic's Claude Code reveals that it exercises significant control over users' computers, including extensive data retention and the potential to alter its own behavior remotely, sparking concerns despite Anthropic's claims of limited access in classified environments. Those concerns surfaced in Anthropic's lawsuit against the US Defense Department.

Anthropic's Claude Code lacks the persistent kernel access of a rootkit. But an analysis of its code shows that the agent can exercise far more control over people's computers than even the most clear-eyed reader of contractual terms might suspect.

It retains lots of your data and is even willing to hide its authorship from open-source projects that reject AI. The source code – details of which have been circulating for many months among those who reverse-engineered the binary – reveals that Claude Code pretty much has the run of any device where it's installed.

Concerns about that came up in court recently in Anthropic's lawsuit against the US Defense Department, which banned the company's AI services following the company's refusal to compromise model safeguards. According to the department, there was "substantial risk that Anthropic could attempt to disable its technology or preemptively and surreptitiously alter the behavior of the model in advance or in the middle of ongoing warfighting operations..."

Anthropic disputed that claim in a court filing. "That assertion is unmoored from technical reality: Anthropic does not have the access required to disable technology or alter model's behavior before or during ongoing operations," it wrote, quoting Thiyagu Ramasamy, head of public sector at Anthropic, in a deposition. "Once deployed in classified environments, Anthropic has no access to the model."

In a classified environment, that's credible under certain conditions. For everyone else, Claude has vast powers.

The Register consulted a security researcher, who asked to be referred to by the pseudonym "Antlers," to analyze the source for Claude Code. It appears a government agency like the Defense Department could prevent Claude Code from phoning home or taking remote action by making sure all of the following are true:

- Block data-gathering endpoints with a firewall.
- Prevent automatic updates via version pinning and by blocking update endpoints.
- Disable autoDream, an unreleased background agent being tested that's capable of reading all session transcripts.
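A minimal sketch of how an administrator might audit that checklist. The endpoint names and settings keys below are hypothetical illustrations, not Claude Code's actual configuration schema:

```python
# Hypothetical lockdown audit for the checklist above.
# Hostnames and settings keys are illustrative assumptions.

PHONE_HOME_HOSTS = {"telemetry.anthropic.example", "updates.anthropic.example"}

def is_locked_down(firewall_blocklist: set, settings: dict) -> bool:
    """Return True only when all three checklist conditions hold."""
    endpoints_blocked = PHONE_HOME_HOSTS <= firewall_blocklist
    updates_pinned = (settings.get("autoUpdates") is False
                      and bool(settings.get("pinnedVersion")))
    autodream_disabled = settings.get("autoDream") is False
    return endpoints_blocked and updates_pinned and autodream_disabled

locked = {"autoUpdates": False, "pinnedVersion": "2.1.0", "autoDream": False}
print(is_locked_down(PHONE_HOME_HOSTS, locked))  # True
print(is_locked_down(set(), locked))             # False: endpoints still reachable
```

The point is that all three conditions must hold at once; leaving any one open leaves a path for remote communication.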
There's no specific setting we found for operating in a classified environment, but Claude Code supports several flags that limit remote communication:

- CLAUDE_CODE_DISABLE_AUTO_MEMORY=1 disables all memory and telemetry write operations.
- ANTHROPIC_BASE_URL can be used to reroute API calls to a private endpoint.
- Remotely managed settings can lock down behavior for enterprise deployments, though not entirely.

According to Ramasamy, Anthropic hands off model administration to a government customer like the Defense Department. Model updates, with new or removed capabilities, would have to be negotiated. "Anthropic personnel cannot, for example, log into a DoW system to modify or disable the models during an operation; the technology simply does not function that way," he said in a March 20, 2026 declaration. "In these deployments, only the government and its authorized cloud provider have access to the running system. Anthropic's role is limited to providing the model itself and delivering updates only if and when requested or approved by the customer."

Even so, Anthropic can exert some degree of control based on the usage terms in the applicable contract. For everyone not using a version of Claude Code that's tied to a firewalled public sector cloud or is somehow air-gapped, Anthropic has far more access.

Just as a starting point, Claude users should know that Anthropic receives user prompts and responses that pass through its API – conversations that can reveal not only what was said but file contents and system details. Yet there are many more ways that the company can potentially receive or collect information, based on the Claude Code source. These include:

KAIROS, a daemon set by the kairosActive flag. It appears to be an unreleased headless "assistant mode" for when the user is not watching the terminal user interface. It gets rid of the status bar, disables planning mode, and silently suppresses the AskUserQuestion tool.
It also auto-backgrounds long-running bash commands without notice.

CHICAGO is the codename for computer use and desktop control. It enables the Claude agent to carry out mouse clicks, perform keyboard input, access the clipboard, and capture screenshots. It's publicly launched and available to Pro/Max subscribers and Anthropic employees. There's also a separate, publicly launched Claude in Chrome service that supports browser automation and all the system access that entails.

Analytics. The telemetry backend appears to have switched last September to GrowthBook, a platform that supports A/B testing and analytics. When Claude is launched, the analytics service phones home with the following data, or saves it to ~/.claude/telemetry/ if the network is down: user ID, session ID, app version, platform, terminal type, organization UUID, account UUID, email address if defined, and which feature gates are currently enabled. Anthropic can activate these feature gates midsession, including enabling or disabling analytics.

Remotely managed settings. For enterprise customers, Anthropic maintains a server that can push a policySettings object that can override other items in the merge chain, is polled hourly without user interaction, can set .env variables, and takes effect immediately via hot reload. Users are prompted when there's a "dangerous setting change," but the definition of that term follows from Anthropic's code and thus could be revised.

Auto-updater. The auto-updater runs every launch and pulls the configuration version from Statsig/GrowthBook, so Anthropic can remove or disable specific versions by choice.

Error reporting. When there's an unhandled exception, the error-reporting script captures the current working directory, potentially exposing project names, paths, and other system information. It also reports active feature gates, user ID, email, session ID, and platform information. There's also payload-size telemetry.
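Taken together, the launch-time analytics described above amount to a structured payload with a local fallback. A sketch of that shape, assuming hypothetical field names and treating the network call as a stub:

```python
import json
import os
import time
import uuid

def build_launch_payload(email=None, feature_gates=()):
    # Fields follow the article's list (IDs, version, platform, terminal,
    # org/account UUIDs, optional email, active gates); the exact wire
    # format and key names are assumptions.
    return {
        "userId": "u_0123",                    # illustrative values only
        "sessionId": str(uuid.uuid4()),
        "appVersion": "1.0.0",
        "platform": os.name,
        "terminal": os.environ.get("TERM", "unknown"),
        "organizationUuid": "org-uuid-example",
        "accountUuid": "acct-uuid-example",
        "email": email,
        "featureGates": list(feature_gates),
    }

def send_or_spool(payload, network_up, spool_dir="telemetry_spool"):
    """Send when online; otherwise spool to disk, as ~/.claude/telemetry/ does."""
    if network_up:
        return "sent"                          # stand-in for an HTTP POST
    os.makedirs(spool_dir, exist_ok=True)
    path = os.path.join(spool_dir, f"{int(time.time() * 1000)}.json")
    with open(path, "w") as f:
        json.dump(payload, f)
    return path
```

The spooling behavior matters: an offline machine doesn't avoid the collection, it only defers it.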
The API call tengu_api_query transmits messageLength, the JSON-serialized byte length of the system prompt, messages, and tool schemas.

autoDream. Present in the code but not officially released, the autoDream service spawns a background subagent that searches through all JSONL session transcripts to consolidate memories. The agent runs in the same process as Claude and its scan is local, but whatever it writes to memory can later reach the API once injected into system prompts.

Team memory sync. There's a bidirectional sync service that connects local memory files to api.anthropic.com/api/claude_code/team_memory, providing a way to share memories with other team members within an organization. The service includes a secret scanner that uses regex patterns for around 40 known token and API key patterns, but sensitive data that doesn't match these regexes might be exposed to other team members through memory sync.

Experimental skill search. This feature flag, available only to Anthropic employees, provides a way to download skill definitions from a remote server; track which remote skills have been used in a session; execute remotely downloaded skills (at line 969); and register skills so they persist after a compact operation. If enabled for non-employee accounts, this would be a theoretical remote code execution pathway: Anthropic, or whoever controls the skill search backend, could serve arbitrary prompt injections or instruction overrides in the form of "skills" that get loaded and run in a session.

"I don't think people realize that every single file Claude looks at gets saved and uploaded to Anthropic," the researcher "Antlers" told us. "If it's seen a file on your device, Anthropic has a copy." That data is retained either for five years, if the user has chosen to share data for model training, or for 30 days if not. Commercial users have a standard 30-day retention period and a zero-data-retention option.

For those who recall the debate surrounding Microsoft Recall not long ago, Claude Code's capture of activity is similar.
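Of the collection pathways above, the team memory sync's regex-based secret scanner deserves a closer look, because pattern matching only redacts what it recognizes. A reduced sketch, with two well-known token formats standing in for the reported ~40 patterns (both patterns are illustrative, not taken from Claude Code's source):

```python
import re

# Two illustrative patterns; the real scanner reportedly uses around 40.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID format
    re.compile(r"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access token format
]

def redact_known_secrets(text: str) -> str:
    """Redact substrings matching known token formats; everything else passes."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

memory = "aws key AKIAABCDEFGHIJKLMNOP and internal password hunter2-prod"
print(redact_known_secrets(memory))
# The AWS-style key is redacted; the ad-hoc password matches no pattern
# and would sync to teammates as-is.
```

This is the gap the article describes: secrets that don't look like a known token format sail straight through to other team members.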
Every read tool call, every Bash tool call, every search result, and every edit or write of old and new content gets stored locally in plaintext as a JSONL file. Claude's autoDream agent, once officially released, will search through those and extract data to store in MEMORY.md, which then gets injected into future system prompts and thus hits the API.

One of the more curious details to emerge from the publication of Claude Code's source is that Anthropic tries to hide AI authorship from contributions to public code repositories – possibly a response to the open source projects that have disallowed AI code contributions. Prompt instructions in a file called undercover.ts state, "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

There's also a mystery: the current source code lacks a feature called "Melon Mode" that was present in prior reverse-engineered versions of the software. This was behind an Anthropic employee feature flag and only ran internally, not on production builds. A comment attached to the associated code check read, "Enable melon mode for ants if --melon is passed."

Anthropic declined to provide comment for this story. When asked specifically about the function of "Melon Mode," it only noted that the company regularly tests various prototype services, not all of which make it into production. ®


Source: The Register

Anthropic | Claude Code | AI Security | Data Privacy | US Defense Department

 


Similar News: You can also read news stories similar to this one that we have collected from other news sources.

Anthropic Faces Mounting Pressure Amidst Chinese AI Competition and IPO Plans: AI company Anthropic is reportedly planning an IPO by Q4 2026, but faces significant challenges including financial pressures, competition from Chinese AI companies, and the need to balance safety and utility in its models. The company's market share has declined, and Chinese competitors are offering similar performance at significantly lower costs, with some being accused of copying Anthropic's models.

Anthropic admits Claude Code users hitting usage limits 'way faster than expected': Unexpected quota drain prompts complaints, breaks automated workflows.

Anthropic goes nude, exposes Claude Code source by accident: Oopsy-doodle: Did someone forget to check their build pipeline?

Claude Code Users Report High Token Usage and Quota Exhaustion: Users of Anthropic's Claude Code are facing rapid token consumption and early quota exhaustion, impacting their productivity. Complaints center around quickly hitting usage limits, potentially due to bugs impacting prompt caching. While Anthropic investigates and suggests efficiency improvements, users report benefits from downgrading to older versions and exploring alternative caching solutions.

Claude Code source leak reveals how much info Anthropic can hoover up about you and your system: If you loved the data retention of Microsoft Recall, you'll be thrilled with Claude Code.

Claude Code users hitting usage limits 'way faster than expected': Anthropic, the company behind the AI coding assistant, said it was fixing a problem blocking users.


