"You aren’t simply giving the agent instructions, you are changing how it operates."
Why is the AI-generated "Mic Drop" everywhere now?
KaseKun 14 hours ago [-]
haha interesting - the article is 99% written by me, but i had gemini review it and sharpen up the send-off because it felt weak.
I guess this goes to show that even a subtle touch of an LLM can undermine authenticity.
edit: i've removed that line. I don't like to edit articles after publish (call me old fashioned, but i try to be honest and transparent), in this case though the line adds nothing and your call-out has taught me a good lesson: shit human writing is better than "good" AI writing.
bazmattaz 5 hours ago [-]
I personally don’t think there is anything wrong with this. To the critiques I would say: this is the world we live in now. There are LLMs capable of essentially perfect writing. We need to get used to seeing a lot more content either written by or finished by LLMs.
The best practice for writing docs with LLMs, in my opinion, which you have done, is to write as much as you can first, then feed that into an LLM for context, and then work with the LLM to finalise it. Maybe half the time is spent writing and half the time is spent going back and forth polishing the doc.
Finally, I think it’s important to give the LLM very clear writing guidelines based on your own writing style. I did this by feeding Claude around 20 of my handwritten docs, asking it to analyse my writing style, and then adding that to its Claude.md. After a few rounds of iteration you can get great results!
KaseKun 13 minutes ago [-]
Good advice. After seeing the capability of the skills for frontend design by the impeccable crew ( https://impeccable.style ), i am tempted to make my own `/blog-polish` skill or similar
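For anyone wanting a starting point: a skill is just a directory containing a SKILL.md with YAML frontmatter. A `/blog-polish` skill might look something like this (the name, description, and instructions here are all made up for illustration, not an existing skill):

```markdown
---
name: blog-polish
description: Review a blog draft for AI-sounding phrases and tighten the prose without changing the author's voice.
---

When asked to polish a draft:
1. Flag cliché closers ("mic drop", "game changer") rather than adding them.
2. Prefer the author's original sentence when a rewrite only sounds marginally smoother.
3. Return a list of suggested edits, not a full rewrite, so the author stays in control.
```

The frontmatter `description` is what the agent sees up front; the body is only loaded when the skill is invoked, which is the progressive-disclosure part.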
I quite like it; it’s a perfect signal for me to stop reading.
ticulatedspline 18 hours ago [-]
the simple answer would be that "AI is in use everywhere".
Though I'd love to see an analysis of pre-GPT writing to see whether it was more prevalent than we remember and we simply lacked the acute sensitivity to it.
There's also the potential that AI started it but people read AI stuff and organically propagate AI tropes in their own words because it's part of the writing they consume.
DonsDiscountGas 14 hours ago [-]
Grandiose language, but it's not wrong. The short article was worth reading, IMHO.
mlazos 15 hours ago [-]
Honestly, people spoke like this before, just on LinkedIn. Now that AI has trained on it, we have LinkedIn... everywhere. Welcome to hell.
nextaccountic 16 hours ago [-]
"injecting messages, not prompts"
SyneRyder 21 hours ago [-]
I thought this was worth the quick read. Just as the article says at the start, I thought skills were essentially the same as pasting a long Markdown prompt document into the Claude Code window, or having Claude read the prompt file. But it seems if you invoke the skill, CC handles it quite differently, e.g. it's special-cased for how it survives compaction.
Changed my mental model of using Skills a bit anyway.
KaseKun 14 hours ago [-]
I was stubbornly of the same mindset. I had friends and colleagues who raved about skills, but i thought it was hype-cycle context management - i'm happy to be proven wrong
itmitica 12 hours ago [-]
[dead]
EnPissant 19 hours ago [-]
> 4. persisting context across compactions
> LLMs forget things as their context grows. When a conversation gets long, the context window fills up, and Claude Code starts compacting older messages. To prevent the agent from forgetting the skill’s instructions during a long thread, Claude Code registers the invoked skill in a dedicated session state.
> When the conversation history undergoes compaction, Claude Code references this registry and explicitly re-injects the skill’s instructions: you never lose the skill guardrails to context bloat.
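The mechanism the quoted passage describes can be sketched in a few lines. This is a toy illustration of the claimed behaviour only - the class and method names here are invented, not Claude Code's actual internals:

```python
# Toy model of the described skill registry surviving compaction.
# All names are illustrative; this is not Claude Code's real implementation.

class Session:
    def __init__(self):
        self.messages = []        # conversation history, subject to compaction
        self.active_skills = {}   # dedicated session state, NOT compacted

    def invoke_skill(self, name, instructions):
        # Invoking a skill both injects its instructions into the
        # conversation and records it in the persistent registry.
        self.active_skills[name] = instructions
        self.messages.append(f"[skill:{name}] {instructions}")

    def compact(self, keep_last=2):
        # Older messages are summarised away...
        self.messages = ["<summary of earlier conversation>"] + self.messages[-keep_last:]
        # ...then every registered skill's instructions are re-injected,
        # so the guardrails outlive the compaction.
        for name, instructions in self.active_skills.items():
            self.messages.append(f"[skill:{name}] {instructions}")
```

Under this model a skill, once invoked, keeps reappearing after every compaction - which is exactly why one might worry that a long session accumulates all invoked skills permanently.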
If true, this means that over time a session can grow to contain all or most skills, negating the benefit of progressive disclosure. I would expect it would be better to let compaction do its thing with the possibility of an agent re-fetching a skill if needed.
I don't trust the article though. It looks like someone just pointed a LLM at the codebase and asked it to write an article.
KaseKun 14 hours ago [-]
Author here,
> It looks like someone just pointed a LLM at the codebase and asked it to write an article.
Not entirely true. I pointed an LLM at the codebase to get me to the right files for understanding skills, and to map out the dependencies and lifecycles - then I spent quite a bit of time reading the code myself and writing about it.
An AI review at the end of the writing (to "sharpen" the language) unfortunately brought in a couple of AI fingerprints (note the "mic drop" comment above)
edit: write -> right (it's 8am)
KaseKun 23 hours ago [-]
A technical breakdown of how agent skills are parsed, rendered, injected, and refreshed in your Claude Code working session.
Rendered at 11:26:41 GMT+0000 (UTC) with Wasmer Edge.
realistically, though, i quite like writing. The other article i've posted ( https://www.dardar.co/articles/your-data-agent-is-wrong ) is 100% me, but as a consequence it feels kinda preachy and verbose in places haha