Usage Rules: Leveling the Playing Field for AI-Assisted Development
How the OSS community can democratize LLM context management
We have a new paradigm available to us in software engineering, whether we like it or not. In some ways, it feels like we’re living in the year 3000; in other ways, it’s like the Stone Age. There are people succeeding beyond their wildest expectations with agentic tooling, and there are people trying and failing to find any success with these tools. OSS is about leveling the playing field, so let’s level it.
Right now, we have the ability to automate and share the vast majority of agentic development tools, but there is one major missing piece. Folks are using tools like Claude Code, Codex, Gemini, etc., which roll forward as new tools and best practices arise. We also use things like MCP servers to distribute tools. While some MCP servers are game-changing, I have lots of gripes with MCP, but that’s a topic for later. The underlying software and libraries that agents work with are also distributed via package managers. All of it can be iterated on, and everyone can update to benefit.
However, the most crucial aspect of agentic development is currently left for everyone to figure out on their own! We can give our agents access to tools which they can use to populate their context window (i.e. the prompt), but the prompt that we actually start with affects everything down the line.
Whether or not you realize it, all of these agentic coding tools have their own, typically hefty, system prompt designed to make an agent better at operating within the framework and tools they give it. Most folks realize that system prompts are both impossible to protect and not a realistic moat, so many are published. However, as soon as you type into the chat box, you’re doing what many call “prompting”, but what I think of as “context management”.
Context Management
Ultimately, LLMs take some context and produce some subsequent text. It’s wild when you boil it down, but that is how it all works. Special JSON response formats indicate tool calls. “Thinking” is just indicated by wrapping some text in <thinking> tags. Success with LLMs means figuring out this weird, difficult-to-predict, “squishy” concept of context management. While complex in practice, it really boils down to “what text in will produce desirable text out?”
While the concept of “vibe coding” is honestly ridiculous, it does nail one fascinating aspect of this whole thing: context management, in my experience, really does involve intuition. It’s not realistic to think about the mathematics or the vectors. It feels more like a soft skill, like a people problem, than a technical one.
As an OSS software engineer, a lot of my job is to lift up the community and engineers around me, and what I see currently is that they are being left behind. I’m not talking about LLM skeptics. I’m not looking to shove LLMs down anyone’s throat. I’m talking about less experienced engineers, or engineers who don’t have the time to discover and hone this new skill. They feel that there is some new state of the art out there, but that it is inaccessible to them because they haven’t figured out how to wield it.
Usage Rules - Breaking down the silos
The solution for this is frankly simple, and it leans into the patterns we already have in place for software engineering. I’m calling it “Usage Rules”. The concept is simple: distributed with any given software package or tool, there is a `usage-rules.md` file and/or a `usage-rules/` folder with markdown files inside of it. These usage rules are not general documentation. It isn’t `llms.txt`. Instead, it is “what the authors of the tool think you should put into your agent’s context window to successfully use their package”.
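As a sketch, here is what a `usage-rules.md` might contain. The package name and every rule below are hypothetical, purely to show the flavor of the thing:

```markdown
# Usage rules for my_http (a hypothetical package)

- Always pass an explicit `:timeout` option to `MyHttp.request/2`.
- Prefer `MyHttp.stream/2` for large response bodies instead of
  loading them into memory.
- `MyHttp.get!/1` is deprecated and raises on non-2xx responses;
  use `MyHttp.get/1` and match on the `{:ok, _} | {:error, _}` tuple.
```

A few opinionated, corrective lines like these are exactly the mistakes an agent would otherwise make repeatedly.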
For large frameworks, usage rules might be really big. For smaller/simpler tools, it might just be a few lines. For end users, even just a few lines can make a huge difference. Then we use tooling to synchronize those usage rules into the appropriate places (i.e. `AGENTS.md`, `CLAUDE.md`) on the user’s file system, in such a way that it can be kept up to date over time. I’ve implemented this tooling for the Elixir ecosystem, and I’d strongly encourage other language, framework, and library developers to follow suit, using the same file naming conventions.
In practice, we use it like so:
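The invocation below is a sketch; `mix usage_rules.sync` is the task the package ships, but check the package docs for the exact flags your version supports:

```shell
# Sync the built-in Elixir/OTP rules plus the usage rules from all
# direct dependencies into AGENTS.md, so it can be re-run after upgrades.
mix usage_rules.sync AGENTS.md --all
```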
This synchronizes some built-in rules for Elixir & OTP, as well as any usage rules from your direct dependencies. There are lots of different ways to use usage rules. Perhaps you want some inlined into your AGENTS.md and some linked to the dependencies folder. Or perhaps you want to copy all rules into a folder and link there. Or perhaps you want to use CLAUDE.md-style `@file/link`s. All available as options to `usage_rules.sync`!
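For example, the variants described above might look like this. The flag spellings are my best recollection of the package’s options; verify them against the hexdocs before relying on them:

```shell
# Inline every rule directly into AGENTS.md
mix usage_rules.sync AGENTS.md --all --inline usage_rules:all

# Or keep AGENTS.md small and link out to the rules in deps/
mix usage_rules.sync AGENTS.md --all --link-to-folder deps
```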
All reports show effectively a night-and-day difference in folks’ experience with agents. The more off the beaten path, or the more recently released, the tools you’re working with are, the more effective these rules are. Software changes a lot more often than LLMs reindex the internet, so you have this problem even if you are using the most popular tech that exists.
On a fresh project, this yields a pre-loaded and useful AGENTS.md
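The shape of that generated file is roughly the following; the marker comments and section contents here are an illustrative sketch, not verbatim output:

```markdown
<!-- usage-rules-start -->
## elixir usage
(built-in rules about the standard library, pattern matching, OTP, ...)

## some_dependency usage
(the rules shipped in some_dependency's usage-rules.md)
<!-- usage-rules-end -->
```

Because the managed section sits between markers, re-running the sync task updates it in place without clobbering anything you wrote by hand.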
What are the benefits?
There are numerous benefits, but let’s go through some of the major ones:
End users can get “free” context that helps their agents succeed with the tools they use.
These can be collaborated on. When agents keep making the same mistakes, or not choosing the correct tools for the job, you can make a PR to the usage-rules.md!
Usage rules are updated with the project. When you upgrade your dependencies, simply re-sync your usage rules, and your agents are using the latest features, no matter the model’s knowledge cut-off date.
Taking it even further for Elixir
The Elixir usage_rules package provides tools that allow agents to be successful with no need for additional MCP servers. This is another critical component of making this stuff easy to use and accessible to everyone. We distribute two tasks, `mix usage_rules.search_docs` and `mix usage_rules.docs`, which allow the agent to search Elixir’s centralized documentation repository across all packages you are using in your app (or for specific packages). The built-in usage rules instruct the agent on how to run these tasks effectively.
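The two tasks are used like this; the task names come from the package itself, while the queries are just illustrative:

```shell
# Search the docs of every package in your app for a term
mix usage_rules.search_docs "handle_continue"

# Pull up the docs for a specific module or function
mix usage_rules.docs Enum.reduce
```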
Instead of taking repeated shots in the dark, agents now have a foolproof method of discovering their mistakes and what they don’t know! No need for an MCP server or any particular agentic tool; they can all call bash commands.
Rising Tides
I hope that these patterns and this type of tooling catch on, because a rising tide lifts all boats. We’ve been doing this for decades in OSS, and new paradigms don’t change that mission.
To learn more, or to get started using it, see our docs at hexdocs.pm/usage_rules