NHacker Next
New Claude Code programmatic usage restrictions (twitter.com)
SyneRyder 2 days ago [-]
Ouch. I've just been building a tool to go through my historic usage. I'm only on the Max 5x plan, and I only use about 40% of my weekly usage allowance. But it looks like even that usage would now cost me $1000/month of API usage under the new plan. That's a 10x price increase.

At least we've got clarity now? But a lot of my value comes from "claude -p" usage, either scheduled tasks while I'm asleep, or responding to incoming emails / voicetexts. Even the email replies will barely fit in $100/month. I'm not going to pay $1000 / month, so I guess it really is time for me to look at the competition and move my programmatic usage to them.

Man, I love the Claude models, and the whole idea of constitutional AI. We built a lot of tools & infrastructure together, but kept a lot of logs as well. I'll be really sad if I mostly have to move on now.

coldtea 2 days ago [-]
>responding to incoming emails / voicetexts.

You need an AI for that?

SyneRyder 2 days ago [-]
I'm sending the emails and voicetexts to Claude; they're incoming on my machine, but they're from me.

When I'm away from my computer and out walking, I'll often think of a task for Claude, or I might bounce an idea back and forth with Claude via voice messages. I wrote a small Go program to watch my email and launch Claude via "claude -p" when it sees an email from myself addressed to it.
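That watcher loop can be sketched in a few lines of Go. The "claude:" subject marker, the addresses, and the single-pass main are illustrative assumptions, not SyneRyder's actual implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// shouldTrigger applies deterministic pre-checks before any model is
// involved: only self-addressed mail carrying an explicit marker subject
// is ever turned into a prompt.
func shouldTrigger(from, to, subject, myAddr string) bool {
	return from == myAddr && to == myAddr &&
		strings.HasPrefix(strings.ToLower(subject), "claude:")
}

// runHeadless launches a one-shot non-interactive session via `claude -p`.
func runHeadless(prompt string) ([]byte, error) {
	return exec.Command("claude", "-p", prompt).CombinedOutput()
}

func main() {
	// The real daemon would sit in an IMAP/Maildir polling loop; this is
	// a single pass over one fabricated message.
	from, to, subject := "me@example.com", "me@example.com", "claude: summarise today's logs"
	if shouldTrigger(from, to, subject, "me@example.com") {
		task := strings.TrimSpace(strings.TrimPrefix(subject, "claude:"))
		fmt.Println("would run: claude -p", task)
		_ = runHeadless // not invoked in this sketch
	}
}
```

The point of keeping the trigger logic in plain Go is that nothing model-shaped runs until a message has already passed every deterministic check.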

Claude also has a different "character" when collaborating over email, it feels more like a colleague. Hard to describe, but email almost feels like a better interaction UI than the chat window.

I had been starting to train Claude to see how it might go on customer service (eg maybe it could reply to my customers while I'm asleep), but at current Anthropic API costs I think that might still be too expensive.

genxy 2 days ago [-]
claude -p loads a lot less garbage into the context.
spoiler 2 days ago [-]
I'm 99% sure my old boss was pasting Slack messages in and out of ChatGPT. Some people are feral with this AI bullshit
arm32 2 days ago [-]
How else am I going to rapidly cognitively decline?
Schiendelman 22 hours ago [-]
Couldn't you do this with Cowork, without API usage?
SyneRyder 21 hours ago [-]
I don't think so, because I don't think you can trigger Cowork from an external program?

(I could be wrong!)

I'm not using the regular email connection methods, because I don't want to give Anthropic complete access to my email account. I run a ton of deterministic checks first in a Go program that vets each email, to avoid lethal trifecta attacks. The model technically has no access to email at all. I only give it a prompt with the necessary info, plus access to a custom MCP reply tool that can only email me.

Basically I'd want Cowork within my external loop, and Anthropic wants to own the loop instead. (Unless I've missed a way to do it.)

----

EDIT: Also, to the person who just tried to lethal trifecta me - nice try, but you just demonstrated all the exact reasons Cowork / claude-code needs to be within the external loop of a deterministic program. This is why you don't just dump external input straight into context, or give the model direct access to everything. We're going to see a lot more of this, not just as more people use agents, but as more hosted webmail systems decide they need to add their own homebrew AI models into everyone's systems. And seriously, German servers really need to start tightening up their security.

Schiendelman 18 hours ago [-]
What on earth is a lethal trifecta??

Could you write a bit of local code (you said Go?) to dump the email you want acted on to a local file, then schedule Claude to check that file periodically?

SyneRyder 1 hour ago [-]
Lethal Trifecta is the term Simon Willison (simonw) coined for the triple combination of giving an AI access to private data, context including external inputs (prompt injection risk), and tools for external output (enabling exfiltration):

https://simonwillison.net/2025/Jun/6/six-months-in-llms/#ai-...

https://simonwillison.net/tags/lethal-trifecta/

EDIT: As for scheduling, Claude Desktop / Cowork only allows scheduling a task to run once an hour. That doesn't allow immediate responses to email or voicemail. Leaving my previous reply below though.

---

I can easily put emails in a text file for Claude to read, but scheduling 60 Claudes an hour to open a file that is usually empty... that's exactly the kind of usage Anthropic is cracking down on. Claude doesn't enjoy spinning up with nothing to do either.

If I could set a 1 - 5 minute schedule in Claude Desktop, and create a hook tool that runs on SessionStart to check the email and can cancel the session before it starts if there's no email to react to, that might work. But I'd rather have my tiny email daemon in the background at 0% CPU and tiny RAM usage than the Claude Desktop behemoth constantly idling at 3% CPU and eating 500MB of RAM unnecessarily. Still, thanks for the idea, it might save me money in June!

vova_hn2 2 days ago [-]
XCancel (alternative Twitter frontend) link: https://xcancel.com/ClaudeDevs/status/2054610152817619388

I think that this is much better than the previous situation with total lack of clarity on what is allowed and what isn't.

TomGarden 2 days ago [-]
This sucks. I use `claude -p` over Tailscale to code by voice while I'm on the go, for accessibility reasons, and most of the time I do the same at the computer. Running through $200 in API pricing takes no time. Oh well, time to switch providers I guess.
deaux 1 day ago [-]
Out of curiosity, why do you use `claude -p` for that over remote control? I use that for similar work.
TomGarden 1 day ago [-]
The biggest difference is that mine is audio-first: it reads everything out over Android TTS by default, and runs a computer-side Parakeet + Silero VAD server for transcription (my eyes struggle with small screens, though I use it text-only occasionally). It's like a voice assistant, but with Claude Code. I also made a custom GUI with shortcuts and stuff, so saying "end conversation" actually ends the conversation, etc.

Maybe something similar can be done with tmux still, I'm definitely going to explore it

deaux 1 day ago [-]
Ah, so you use it because the STT you can run on your computer is a lot better than what you can run on your phone?

I use on-device STT with Claude Code's built-in remote control feature to do what you do without needing claude -p, but I guess I don't use it for large enough quantities of text where on-device STT quality becomes a big issue.

TomGarden 1 day ago [-]
The big thing for me is the TTS, custom UI and persistent background mode! ie it switches turns automatically etc, no need to touch screen or keep screen on.

The STT on Gboard is very solid, so if that covers your use case you're good!

khoirul 2 days ago [-]
Switched to Codex a few days ago and not regretting it. Claude Code with the $20 subscription has been bad lately. Burning through quota in no time, even when sticking to older Sonnet models.
rickdg 2 days ago [-]
Guess we're not short of reasons to stick to Codex.
stusmall 2 days ago [-]
Does anyone know if this will impact ACP invoking Claude, i.e. using Claude from Zed? I assume not, but looking for confirmation.
bhu8 2 days ago [-]
It would unfortunately impact it. ACP uses the Claude SDK and is developed by a third party.
a34729t 2 days ago [-]
So basically local LLMs are rapidly improving to the point where they can handle many of the automation or local coding use cases on reasonable hardware (say $5k or less). What's the edge for frontier model providers here?
johntash 1 day ago [-]
Frontier models are still way better than local models, from what I've seen. To get close to them with large context windows and decent performance, you need more than a reasonable machine, imo.

I'm hoping local llms start rapidly improving even more though.

2001zhaozhao 2 days ago [-]
Inb4 future Claude developer workflow (REQUIRED to save 90% of token $):

- The AI gives human prompts to copy-paste into Claude Code

- The human copies the prompts into Claude Code

- The AI reads output from Claude Code

LoganDark 2 days ago [-]
I use `claude -p` interactively -- I understand why they put it under this new umbrella, but having to open the fullscreen interface each time to not be counted as a programmatic tool is a little disappointing.
nikolay 2 days ago [-]
Goodbye! Codex is better anyway!
eagle10ne 2 days ago [-]
When AOL was released, they marketed it as unlimited. How times have changed with Claude's limits.
potsandpans 2 days ago [-]
Just stop using Claude. It's easy. Grab pi, some provider with open weights, cheaper inference, or a more permissive subscription plan (OpenAI, Alibaba, DeepSeek, what have you) and never look back.
kreidema 2 days ago [-]
This is annoying because tools like conductor use the SDK. So this will either be the end of conductor for me or I switch to codex. Interesting dilemma.
Kim_Bruning 2 days ago [-]
They're definitely taking aim at people who automate things. Which is to say: programmers.

Which is interesting, since you'd also think that programmers would be their primary customers.

coldtea 2 days ago [-]
You're not a customer of a business when you cost them $2 for every $1 they make from you. At best you're their VC-subsidised target demographic.
Kim_Bruning 2 days ago [-]
If so, then they don't actually have a product. Which -I guess- is what you're saying. I'm worried you might be right. Even though Claude is otherwise really good.
SaucyWrong 2 days ago [-]
I’d say there is a product there, what remains to be seen IMO is whether the market will bear whatever the price of that product ends up being once Anthropic are finished changing their terms, pricing, and rules of engagement every several weeks…
Kim_Bruning 2 days ago [-]
I'm definitely nervous to be a customer. Which is probably enough signal by itself, isn't it? :-/
harpooned 2 days ago [-]
rip conductor :(

codex W

andrewstuart 2 days ago [-]
Can someone explain in plain English please.
martinald 2 days ago [-]
Currently, if you use claude -p (non-interactive mode) in, for example, CI/CD, you can use your included subscription tokens.

They are now changing it to be:

You get $20/$100/$200 of "credit" that can be used for claude -p. The problem is, once you're out of that, you pay normal API rates (outrageously expensive).

SaucyWrong 2 days ago [-]
“All of your favorite Claude harnesses will get dramatically more expensive starting on June 15”
0xking 1 day ago [-]
[flagged]
StackTopherFlow 2 days ago [-]
The anthropic enshittification continues.