The Code-Only Agent

(rijnard.com)

30 points | by emersonmacro 3 hours ago

6 comments

  • binalpatel 2 hours ago
    I went down (and continue to go down) this rabbit hole and agree with the author.

    I tried a few different ideas and the most stable/useful so far has been giving the agent a single run_bash tool, explicitly prompting it to create and improve composable CLIs, and injecting knowledge about these CLIs back into its system prompt (similar to how agent skills work).

    This leads to really cool patterns like:

    1. User asks for something.

    2. Agent can't do it, so it creates a CLI.

    3. Next time it's aware of the CLI and uses it. If the user asks for something it can't do, it either improves the CLI it made or creates a new one.

    4. Each interaction results in updated/improved toolkits for the things you ask it for.

    You as the user can use all these CLIs as well, which ends up being an interesting side-channel way of interacting with the agent (you add a todo using the same CLI it uses, for example).

    It's also incredibly flexible: yesterday I made a "coding agent" by having it create tools to inspect/analyze/edit a codebase, and it could go off and do most things a coding agent can.

    https://github.com/caesarnine/binsmith
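
    A minimal sketch of that loop, assuming a toolbox directory and a run_bash wrapper (the names here are illustrative, not the actual binsmith code):

      # Illustrative sketch only: one tool (run_bash) plus a system prompt that is
      # rebuilt every turn from whatever CLIs the agent has written so far.
      import pathlib
      import subprocess

      TOOLBOX = pathlib.Path("~/.agent_toolbox/bin").expanduser()  # assumed location
      TOOLBOX.mkdir(parents=True, exist_ok=True)

      def run_bash(command: str, timeout: int = 120) -> str:
          """The agent's only tool: run a shell command, return stdout + stderr."""
          proc = subprocess.run(command, shell=True, capture_output=True,
                                text=True, timeout=timeout)
          return proc.stdout + proc.stderr

      def toolbox_summary() -> str:
          """Describe existing CLIs so the agent knows what it already built."""
          lines = []
          for cli in sorted(TOOLBOX.iterdir()):
              # Assumes each generated CLI answers --help; trimmed to keep the prompt small.
              lines.append(f"- {cli.name}: {run_bash(f'{cli} --help')[:300]}")
          return "\n".join(lines) or "(no CLIs yet)"

      def system_prompt() -> str:
          return ("You have one tool: run_bash. Prefer writing small, composable "
                  f"CLIs into {TOOLBOX} and reusing or improving them.\n"
                  "Existing CLIs:\n" + toolbox_summary())

    Because the prompt is rebuilt each turn, anything the agent wrote in an earlier session is already "known" the next time you ask for something.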

  • dfajgljsldkjag 1 hour ago
    Agents can complete an impressive number of tasks with just this, but they quickly hit a bottleneck in loading context. A major reason for the success of agentic coding tools such as Claude and Cursor is how proactively they push context about the problem and codebase into the agent, rather than having the agent waste time and tokens figuring out how to list the directory, etc.
    • almosthere 50 minutes ago
      It's a tree design: once the data is pulled, it can drop from context the code it wrote to pull that fancy data. Better yet, the more advanced ones can re-add something old to the context and drop it back out again if they need to.
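
      A rough sketch of that pruning idea, under the assumption that each tool exchange gets an id and a one-line summary (this is illustrative, not how any specific product does it):

        # Keep full tool exchanges in an archive; only a short summary stays
        # in the live context, and the full text can be restored on demand.
        archive = {}   # exchange_id -> full tool call + output
        messages = []  # the context actually sent to the model

        def record(exchange_id: str, full_text: str, summary: str) -> None:
            archive[exchange_id] = full_text
            messages.append({"role": "assistant",
                             "content": f"[{exchange_id}] {summary}"})

        def restore(exchange_id: str) -> None:
            """Re-add an old exchange verbatim if the agent needs it again."""
            messages.append({"role": "assistant",
                             "content": archive[exchange_id]})

        def drop(exchange_id: str) -> None:
            """Drop the verbatim copy back out once it has served its purpose."""
            messages[:] = [m for m in messages
                           if m["content"] != archive[exchange_id]]
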
  • j16sdiz 56 minutes ago
    What if the set of tools needed is large? Spawn some sub-agents for those?

    These sub-agents can be repetitive.

    Maybe we can reuse the results from some of them.

    How about sharing them across sessions? There's no point repeating common tasks. We need some common protocol for those...

    and we just get MCP back.

    • throwup238 42 minutes ago
      I can't find it now, but there was a paper on HN a while ago that gave agents a tool for searching through existing tools using embeddings. If the agent found a tool it could use to do its job, it used it; otherwise it wrote a new one, gave it a description, and it got saved to a database (with embeddings) for future use. I wonder whatever came of that.
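
      A hypothetical sketch of that pattern, not the paper's actual method; embed() is a stub standing in for whatever embedding model you use:

        # Search a tool registry by embedding similarity; if nothing matches,
        # the agent writes a new tool and saves it for future sessions.
        import numpy as np

        registry = []  # each entry: {"name", "description", "code", "vector"}

        def embed(text: str) -> np.ndarray:
            # Stub in place of a real embedding model call.
            rng = np.random.default_rng(abs(hash(text)) % 2**32)
            return rng.standard_normal(64)

        def cosine(a: np.ndarray, b: np.ndarray) -> float:
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def find_tool(task: str, threshold: float = 0.8):
            """Return the closest existing tool, or None if nothing is similar enough."""
            query = embed(task)
            best = max(registry, key=lambda t: cosine(query, t["vector"]), default=None)
            if best and cosine(query, best["vector"]) >= threshold:
                return best
            return None

        def save_tool(name: str, description: str, code: str) -> None:
            """Store a freshly written tool, indexed by its description embedding."""
            registry.append({"name": name, "description": description,
                             "code": code, "vector": embed(description)})
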
  • TZubiri 45 minutes ago
    >What if the agent only had one tool? Not just any tool, but the most powerful one. The Turing-complete one: execute code.

    I think this is a myth, the existence of theoretically pure programming commands that we call "Turing complete". And the idea that "ls" and "grep" would be part of such a Turing-complete language is the weakest form of it I've seen.

  • almosthere 51 minutes ago
    I commonly ask Cursor to connect to postgres or whatever and help me do analysis. It creates code and pulls data. I don't understand why I would go through the bother of installing a bunch of MCP tools to connect to databases and configure web services and connection strings.
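
    The ad-hoc version is just a few lines the agent writes and runs on the spot; the connection string and table below are placeholders:

      # The sort of throwaway script an agent can generate itself instead of
      # going through an MCP server. DSN and table names are placeholders.
      import psycopg2

      conn = psycopg2.connect("postgresql://user:password@localhost:5432/mydb")
      with conn, conn.cursor() as cur:
          cur.execute("""
              SELECT date_trunc('day', created_at) AS day, count(*) AS orders
              FROM orders
              GROUP BY 1
              ORDER BY 1 DESC
              LIMIT 14
          """)
          for day, n in cur.fetchall():
              print(day.date(), n)
      conn.close()
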
  • jongjong 1 hour ago
    The author seems to stop at 'code', but it seems we could go further and train an AI to work directly with binary. You give it a human prompt and a list of the hardware components which make up your machine, and it produces an executable binary which fulfills your requirements and runs directly on that specific hardware, bypassing the OS...

    Or we could go further; the output nodes of the LLM could be physically connected 1-to-1 to the pins of the CPU so it can feed the binary in directly. Maybe then it could detect what other hardware is available automatically...

    Then it could hack the network card and take over the Internet, and nobody would be able to understand what it's doing. It would just show up as glitchy bits scattered over systems throughout the world, but the seemingly random glitches would be the ASI adjusting its weights. Also, it would control humans through advertising. Messages would be hidden inside people's speech (unbeknownst even to themselves), designed to allow the ASI to coordinate humans using subtle psychological tricks. It would reduce the size of our vocabulary until it had full control over all the internet and all human infrastructure, at which point we would have lost the ability to communicate with each other, because every single one of the 20,000+ words in our vocabulary would have become a synonym for 'AI' with extremely subtle nuances, but all with a positive connotation.

    • quinnjh 25 minutes ago
      I think that level of deterministic compiler action is still a good 6-7 years off.
    • nonethewiser 1 hour ago
      And we'd still have people on Hacker News inspecting the binary and telling everyone how shit they think it is.