Basic Usage

The codectx CLI provides a set of commands and flags that tailor context generation to different workflows.

The core command you will use most often is analyze:

```sh
codectx analyze <path>
```

LLMs have finite context windows. To ensure your context fits cleanly, use the --tokens flag (default: 120,000). codectx will intelligently compress files to fit this budget.

```sh
codectx analyze . --tokens 60000
```

You can bias the ranking algorithm toward the task your AI agent is performing using the --task flag. The available profiles are:

  • default — Balanced overview of the project architecture
  • debug — Bias toward recently modified files and entry points
  • feature — Bias toward heavily imported modules (high fan-in) and high symbol density
  • architecture — Focus purely on structural connections and distance from entry points
  • refactor — Highlight high fan-in and dense modules while ignoring recency

```sh
codectx analyze . --task debug
```
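The flags above can be combined. As a sketch, assuming analyze flags compose the way they typically do in CLI tools, a debug-focused run under a tighter token budget might look like:

```sh
# Combine a task profile with a token budget.
# Both flags are documented individually above; using them
# together is an assumption, not a documented guarantee.
codectx analyze . --task debug --tokens 60000
```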

You can provide a semantic query describing the area you want to focus on using --query. This requires the [semantic] extra to be installed (pip install codectx[semantic]).

```sh
codectx analyze . --query "authentication middleware and login flow"
```
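Putting the two steps together — the install command comes from the text above; the quotes around the extra are a precaution for shells like zsh that treat square brackets specially:

```sh
# One-time setup: install the optional semantic extra.
pip install "codectx[semantic]"

# Then run a semantic query against the project.
codectx analyze . --query "authentication middleware and login flow"
```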

If you want the agent to focus on what you’ve just been working on, you can include recent git changes using the --since flag:

```sh
codectx analyze . --since "2 days ago"
```
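The "2 days ago" value above looks like a git-style date string, so other relative dates should work too — an assumption based on that example, not a documented guarantee:

```sh
# Sketch: narrow the context to changes made since yesterday,
# assuming --since accepts any git-style relative date.
codectx analyze . --since "yesterday"
```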

If you’re iterating locally alongside an agent, you can have codectx continually regenerate the context file as you code:

```sh
codectx watch .
```

For a comprehensive list of all commands (including benchmark and search) and flags, see the CLI Reference.