Caveman Mode Save Token? (twitter.com)
rdevilla 4 days ago [-]
Only two or three weeks from incepting the idea of a token-efficient LLM English dialect to seeing it in practice. I just never imagined it would take... this particular form...

https://news.ycombinator.com/item?id=47434846

gavinray 4 days ago [-]
I've had the thought that English is an efficiency barrier for a while now. Surely there are more information-dense representations of semantic concepts.

Some languages for example have single characters that represent entire ideas/phrases.

https://news.ycombinator.com/item?id=47442478

downboots 4 days ago [-]
https://en.wikipedia.org/wiki/Le_Taureau

"Everything should be made as simple as possible, but not simpler"

schmorptron 4 days ago [-]
I used a system prompt similar to this, where I dumped the entirety of https://grugbrain.dev/ into it, prefaced with an instruction that the assistant emulate grug.

Didn't find it particularly useful, but it is funny!

brightball 4 days ago [-]
Can this actually work?
illwrks 4 days ago [-]
It does. I've been tinkering with Copilot Studio Agents, where you can hit an 8k-character limit quickly. By asking Copilot to compress your instructions while ensuring they stay human-readable, you can cut them back to about 5k characters. The information is denser but functionally the same, and the agent is just as consistent as before.
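The workflow described above can be sketched in a few lines: wrap the long instruction block in a meta-prompt asking a model to compress it, then check the result against the character limit. This is a minimal illustration, not Copilot Studio's actual API; the prompt wording and the `fits_limit` helper are assumptions for the example.

```python
# Sketch of the compress-your-instructions workflow. The meta-prompt text
# and limit check are illustrative; Copilot Studio itself exposes no such API.

CHAR_LIMIT = 8_000  # the agent instruction limit mentioned in the comment


def build_compression_prompt(instructions: str) -> str:
    """Build a meta-prompt asking a chat model to densify instructions
    while keeping them human-readable. Send the result to any LLM."""
    return (
        "Compress the following agent instructions. Preserve every rule "
        "and constraint, keep the text human-readable, and remove "
        "redundancy and filler:\n\n" + instructions
    )


def fits_limit(instructions: str, limit: int = CHAR_LIMIT) -> bool:
    """Return True if the instruction block fits under the limit."""
    return len(instructions) <= limit


# Example: a padded instruction block that exceeds the 8k limit
long_instructions = "Always answer politely. " * 400  # 9,600 characters
print(fits_limit(long_instructions))  # False: needs compression first
print(build_compression_prompt("Be terse.").endswith("Be terse."))  # True
```

In practice you would feed `build_compression_prompt(long_instructions)` to the model, paste the compressed output back into the agent, and re-check it with `fits_limit` before saving.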
pixel_popping 4 days ago [-]
Anything that reduces input/output works to an extent, logically.