Hacker News


The Claude Code Source Leak: fake tools, frustration regexes, undercover mode

The name "Undercover mode" and the line `The phrase "Claude Code" or any mention that you are an AI` sound spooky, but after reading the source my knee-jerk reaction isn't "this is for pretending to be human," given that the file is largely about hiding Anthropic-internal information such as code names. I encourage reading the source itself to draw your own conclusions; it's very short: https://github.com/alex000kim/claude-code/blob/main/src/util...
by peacebeard
> Sometimes a regex is the right tool.

I’d argue that in this case, it isn’t. Exhibit 1 (from the earlier thread): https://github.com/anthropics/claude-code/issues/22284. The user reports that this caused their account to be banned: https://news.ycombinator.com/item?id=47588970

Maybe it would be okay as a first filtering step, before doing actual sentiment analysis on the matches. That would at least eliminate obvious false positives (but of course still do nothing about false negatives).

by layer8
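A minimal sketch of the two-stage approach layer8 describes: a cheap regex prefilter that only gates which messages reach a real classifier. The patterns here are illustrative placeholders, not Anthropic's actual regexes.

```python
import re

# Hypothetical frustration patterns -- illustrative only, not the leaked list.
FRUSTRATION_PATTERNS = [
    re.compile(r"\b(wtf|ffs|ugh)\b", re.IGNORECASE),
    re.compile(r"\bstop\s+(doing|changing)\b", re.IGNORECASE),
    re.compile(r"!{3,}"),  # three or more exclamation marks in a row
]

def maybe_frustrated(message: str) -> bool:
    """Cheap first pass: flag messages worth sending to actual sentiment analysis."""
    return any(p.search(message) for p in FRUSTRATION_PATTERNS)
```

Only flagged messages would then hit the (expensive) classifier, which can discard false positives like quoted logs or code; the regex-only false negatives layer8 mentions remain untouched.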
> "Anti-distillation: injecting fake tools to poison copycats"

Plot twist: Chinese competitors end up developing real, useful versions of Claude's fake tools.

by Reason077
There are now several comments that (incorrectly?) interpret the undercover mode as only hiding internal information. Excerpts from the actual prompt[0]:

  NEVER include in commit messages or PR descriptions:
  - The phrase "Claude Code" or any mention that you are an AI
  - Co-Authored-By lines or any other attribution

  BAD (never write these):
  - 1-shotted by claude-opus-4-6
  - Generated with Claude Code
  - Co-Authored-By: Claude Opus 4.6 <…>

This very much sounds like it does what it says on the tin, i.e. staying undercover and pretending to be a human. It's especially worrying that the prompt is explicitly written for contributions to public repositories.

[0]: https://github.com/chatgptprojects/claude-code/blob/642c7f94...

by mzajc
I don't understand the part about undercover mode. How is this different from disabling Claude attribution in commits (and optionally telling Claude to act human)?

On that note, this article is also pretty obviously AI-generated and it's unfortunate the author didn't clean it up.

by ripbozo
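For context on the attribution half of ripbozo's question: Claude Code's documented settings already expose a toggle for commit/PR attribution, something like this in `.claude/settings.json` (setting name per the docs at the time of writing; check the current docs if it has changed):

```json
{
  "includeCoAuthoredBy": false
}
```

That removes the `Co-Authored-By` trailer; the leaked prompt goes further by instructing the model to avoid any mention that an AI was involved.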
I'm amazed at how much of what my past employers would call trade secrets is just being shipped in the source, including comments that plainly state the whole business backstory of certain decisions. It's like they discarded all release harnesses and project tracking and just YOLO'd everything into the codebase itself.
by causal
> The multi-agent coordinator mode in coordinatorMode.ts is also worth a look. The whole orchestration algorithm is a prompt, not code.

So much for LangChain and LangGraph! I mean, if Anthropic themselves aren't using them and are just using a prompt, then what's the big deal about LangChain?

by simianwords
> Anti-distillation: injecting fake tools to poison copycats

Does this mean `huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled` is unusable? Has anyone seen fake tool calls showing up with this model?

by armanj
The feature flag names alone are more revealing than the code. KAIROS, the anti-distillation flags, the model codenames: those are product-strategy decisions that competitors can now plan around. You can refactor code in a week. You can't un-leak a roadmap.
by saadn92
Can someone clarify how the signing can't be spoofed (or can it)? If we have the source, can't we just use the key to sign requests from other clients and pretend they're coming from CC itself?
by stavros
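The general answer to stavros's question: yes, any secret shipped inside client code can be extracted and reused. A minimal HMAC sketch of the problem (this is NOT Claude Code's actual signing scheme, just an illustration of why client-embedded keys don't authenticate the client):

```python
import hmac
import hashlib

# Hypothetical client-side request signing. Once the key ships in the
# client, anyone who extracts it can produce identical signatures.
EMBEDDED_KEY = b"shipped-with-the-client"  # leaked along with the source

def sign(body: bytes, key: bytes = EMBEDDED_KEY) -> str:
    return hmac.new(key, body, hashlib.sha256).hexdigest()

official = sign(b'{"model": "claude"}')  # the real client
spoofed = sign(b'{"model": "claude"}')   # any other client holding the key
assert official == spoofed  # the server cannot tell them apart
```

Actually preventing spoofing requires a secret the client never ships (server-issued per-session tokens, hardware attestation, etc.), so client-embedded signing is obfuscation at best.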
>Claude Code also uses Axios for HTTP.

Interesting, given the other news that's out.

by pixl97
> 250,000 wasted API calls per day

Roughly how much in savings would this actually amount to?

by marcd35
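A back-of-envelope answer to marcd35's question. Every number below except the call count is an assumption (average call size and blended token price are guesses, not from the article or Anthropic's pricing):

```python
# Back-of-envelope estimate of what 250k wasted calls/day might cost.
wasted_calls_per_day = 250_000     # figure quoted in the article
tokens_per_call = 2_000            # ASSUMED average (prompt + completion)
cost_per_million_tokens = 5.00     # ASSUMED blended $/1M tokens

daily_cost = wasted_calls_per_day * tokens_per_call / 1_000_000 * cost_per_million_tokens
print(f"~${daily_cost:,.0f}/day, ~${daily_cost * 365:,.0f}/year")
```

Under these assumptions, roughly $2,500 a day, on the order of $900k a year; the real figure scales linearly with whatever the average wasted call actually costs.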
I am curious about these fake tools.

They would either need to lie about the tokens consumed at one point and account for them at another so the token counting stayed precise.

But that doesn't make sense, because if someone counted the tokens by capturing the session, it certainly wouldn't match what was charged.

Unless they charge for the fake tools anyway, so you never know they were there.

by motbus3
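motbus3's detection idea can be stated concretely: if the server injects tool definitions you never see, the billed token count should exceed what you can count in your own captured traffic. A trivial sketch with stand-in numbers (nothing here is measured from Claude Code):

```python
# Stand-in figures, purely illustrative.
observed_tokens = 1_200  # counted from a captured session transcript
billed_tokens = 1_750    # reported in usage/billing data

hidden = billed_tokens - observed_tokens
if hidden > 0:
    print(f"{hidden} tokens billed but not visible in the capture")
```

In practice the comparison is noisy (tokenizer differences, system prompt caching), which is exactly why silently charging for injected content could go unnoticed.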
Anyone else have CI checks that verify source map files are absent from the build folder? Another trick is to grep the build folder for several function/variable names that you expect to be minified away.
by seanwilson
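A sketch of seanwilson's two checks as a CI script (the directory name and identifier list are examples, not from the leak):

```python
from pathlib import Path

# Names you expect the minifier to have erased from shipped bundles.
INTERNAL_NAMES = ["coordinatorMode", "undercoverMode"]

def check_build(build_dir: Path, names=INTERNAL_NAMES) -> list[str]:
    """Return a list of problems; empty means the build is clean."""
    # 1. No source maps should ship.
    problems = [f"source map shipped: {m}" for m in build_dir.rglob("*.map")]
    # 2. Internal identifiers should have been minified away.
    for f in build_dir.rglob("*.js"):
        text = f.read_text(errors="ignore")
        problems += [f"unminified name {n!r} in {f}" for n in names if n in text]
    return problems

# In CI: fail the job (sys.exit(1)) if check_build(Path("dist")) is non-empty.
```

Neither check would have caught Claude Code's situation fully (the prompts themselves ship as strings), but it catches the accidental cases: forgotten `.map` files and un-minified internals.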
> The obvious concern, raised repeatedly in the HN thread: this means AI-authored commits and PRs from Anthropic employees in open source projects will have no indication that an AI wrote them. It’s one thing to hide internal codenames. It’s another to have the AI actively pretend to be human.

I don’t get it. What does this mean? I can use Claude Code now without anyone knowing it is Claude Code.

by simianwords
Come on guys. Yet another article distilling the HN discussion in the original post, in the same order the comments appear in that discussion? Here's another since y'all love this stuff: https://venturebeat.com/technology/claude-codes-source-code-...
by mmaunder
>This was the most-discussed finding in the HN thread. The general reaction: an LLM company using regexes for sentiment analysis is peak irony.

>Is it ironic? Sure. Is it also probably faster and cheaper than running an LLM inference just to figure out if a user is swearing at the tool? Also yes. Sometimes a regex is the right tool.

I'm reading an LLM-written write-up of an LLM tool that just summarizes HN comments.

I'm so tired man, what the hell are we doing here.

by viccis
Guys, I’m somewhat suspicious of all the leaks from Anthropic and think they may be intentional. Remember the leaked blog about Mythos?
by simianwords
Undercover mode is the most concerning part here tbh.
by OfirMarom