# Basic Usage

The codectx CLI offers a handful of commands and flags that support different workflows.
## The analyze Command

The core command you will use most often is `analyze`:

```sh
codectx analyze <path>
```

## Controlling Output Size

LLMs have finite context windows. To ensure your context fits cleanly, use the `--tokens` flag (default: 120,000).
codectx will intelligently compress files to fit this budget.
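Before choosing a budget, it can help to estimate roughly how many tokens a directory holds. The sketch below uses the common rule of thumb of about four characters per token; the heuristic and the file-type filter are illustrative assumptions, not how codectx itself counts tokens:

```python
from pathlib import Path

CHARS_PER_TOKEN = 4  # rough heuristic; codectx's real tokenizer will differ


def estimate_tokens(root: str, suffixes=(".py", ".js", ".ts", ".md")) -> int:
    """Approximate the token count of source files under root."""
    total_chars = sum(
        len(p.read_text(errors="ignore"))
        for p in Path(root).rglob("*")
        if p.is_file() and p.suffix in suffixes
    )
    return total_chars // CHARS_PER_TOKEN
```

If the estimate comes out far above your `--tokens` budget, expect codectx to compress aggressively.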
```sh
codectx analyze . --tokens 60000
```

## Task Profiles
You can bias the ranking algorithm toward the task your AI agent is performing using the `--task` flag. The available profiles are:

- `default`: Balanced overview of the project architecture
- `debug`: Bias toward recently modified files and entry points
- `feature`: Bias toward heavily imported modules (high fan-in) and high symbol density
- `architecture`: Focus purely on structural connections and distance from entry points
- `refactor`: Highlight high fan-in and dense modules while ignoring recency
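If an agent harness drives codectx, it can pick a profile programmatically from the request it received. The `pick_profile` helper below is a hypothetical sketch layered on top of the profile names above; the keyword heuristics are not part of codectx:

```python
# Hypothetical helper: choose a --task profile from a free-text request.
# The keyword lists are illustrative assumptions, not part of codectx.
PROFILE_KEYWORDS = {
    "debug": ("bug", "crash", "traceback", "fix"),
    "feature": ("add", "implement", "feature"),
    "architecture": ("structure", "overview", "architecture"),
    "refactor": ("refactor", "cleanup", "simplify"),
}


def pick_profile(request: str) -> str:
    """Return the first profile whose keywords appear in the request."""
    text = request.lower()
    for profile, keywords in PROFILE_KEYWORDS.items():
        if any(word in text for word in keywords):
            return profile
    return "default"  # balanced overview when nothing matches
```

The chosen profile can then be passed straight to `codectx analyze . --task <profile>`.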
```sh
codectx analyze . --task debug
```

## Semantic Search Ranking
You can provide a semantic query describing the area you want to focus on using the `--query` flag. This requires the `semantic` extra to be installed (`pip install 'codectx[semantic]'`).
```sh
codectx analyze . --query "authentication middleware and login flow"
```

## Including Recent Changes
If you want the agent to focus on what you’ve just been working on, you can include recent git changes using the `--since` flag:
```sh
codectx analyze . --since "2 days ago"
```

## Watching for Changes
If you’re iterating locally alongside an agent, you can have codectx continually update the context file as you code:
```sh
codectx watch .
```

For a comprehensive list of all commands (including `benchmark` and `search`) and flags, see the CLI Reference.
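As a closing example, the flags on this page can be combined in a single invocation. The sketch below assembles the argument list an agent harness might hand to `subprocess.run`; only flags shown above are used, and the helper itself is an illustrative assumption, not part of codectx:

```python
import shlex


def build_analyze_cmd(path=".", tokens=60000, task=None, query=None, since=None):
    """Assemble a codectx analyze invocation from the flags covered above."""
    cmd = ["codectx", "analyze", path, "--tokens", str(tokens)]
    if task:
        cmd += ["--task", task]
    if query:
        cmd += ["--query", query]
    if since:
        cmd += ["--since", since]
    return cmd


cmd = build_analyze_cmd(task="debug", since="2 days ago")
print(shlex.join(cmd))  # pass cmd to subprocess.run once codectx is installed
```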