# Code Assistant
A CLI tool built in Rust for assisting with code-related tasks.
## Features
- Autonomous Exploration: The agent can intelligently explore codebases and build up working memory of the project structure.
- Reading/Writing Files: The agent can read file contents and make changes to files as needed.
- Working Memory Management: Efficient handling of file contents with the ability to load and unload files from memory.
- File Summarization: Capability to create and store file summaries for quick reference and better understanding of the codebase.
- Interactive Communication: Ability to ask users questions and get responses for better decision-making.
- MCP Server Mode: Can run as a Model Context Protocol server, providing tools and resources to LLMs running in an MCP client.
## Installation
Ensure you have Rust installed on your system. Then:
```bash
# Clone the repository
git clone https://github.com/stippi/code-assistant

# Navigate to the project directory
cd code-assistant

# Build the project
cargo build --release

# The binary will be available in target/release/code-assistant
```
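After the build completes, you can sanity-check the binary by printing its usage information (this assumes it exposes the usual `--help` flag generated for Rust CLI tools):

```bash
# Print usage to confirm the build works
# (assumes a standard --help flag)
./target/release/code-assistant --help
```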
## Configuration in Claude Desktop
The `code-assistant` implements the Model Context Protocol by Anthropic. This means it can be added as a plugin to MCP client applications such as Claude Desktop.
### Configure Your Projects
Create a file `.code-assistant/projects.json` in your home directory. This file declares the projects that are available in MCP server mode (via the `list_projects` and `open_project` tools). It has the following structure:
```json
{
  "code-assistant": {
    "path": "/Users/<username>/workspace/code-assistant"
  },
  "asteroids": {
    "path": "/Users/<username>/workspace/asteroids"
  },
  "zed": {
    "path": "/Users/<username>/workspace/zed"
  }
}
```
Notes:
- The absolute paths are never sent by the tool, to avoid leaking information about your local file system to LLM cloud providers.
- This file can be edited without restarting Claude Desktop or the MCP server.
### Configure MCP Servers
- Open the Claude Desktop application settings (Claude -> Settings)
- Switch to the Developer tab.
- Click the Edit Config button. A Finder window opens, highlighting the file `claude_desktop_config.json`. Open that file in your favorite text editor.
An example configuration is given below:
```json
{
  "mcpServers": {
    "code-assistant": {
      "command": "/Users/<username>/workspace/code-assistant/target/release/code-assistant",
      "args": [
        "server"
      ]
    }
  }
}
```
## Usage
Code Assistant can run in two modes:
### Agent Mode (Default)
```bash
code-assistant --task <TASK> [OPTIONS]
```
Available options:
- `--path <PATH>`: Path to the code directory to analyze (default: current directory)
- `-t, --task <TASK>`: Task to perform on the codebase (required unless `--continue-task` or `--ui` is used)
- `--ui`: Start with GUI interface
- `--continue-task`: Continue from previous state
- `-v, --verbose`: Enable verbose logging
- `-p, --provider <PROVIDER>`: LLM provider to use [ai-core, anthropic, open-ai, ollama, vertex] (default: anthropic)
- `-m, --model <MODEL>`: Model name to use (defaults: anthropic="claude-3-7-sonnet-20250219", open-ai="gpt-4o", vertex="gemini-1.5-pro-latest", ollama=required)
- `--base-url <URL>`: API base URL for the LLM provider
- `--tools-type <TOOLS_TYPE>`: Type of tool declaration [native, xml] (default: xml). `native` = tools via LLM provider API, `xml` = custom system message
- `--num-ctx <NUM>`: Context window size in tokens (default: 8192, only relevant for Ollama)
- `--agent-mode <MODE>`: Agent mode to use [working_memory, message_history] (default: message_history)
- `--record <PATH>`: Record API responses to a file for testing (currently supported for Anthropic and AI Core providers)
- `--playback <PATH>`: Play back a recorded session from a file
- `--fast-playback`: Fast playback mode - ignore chunk timing when playing recordings
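Several of these flags can be combined. As a sketch, an Ollama invocation with an enlarged context window and an explicit base URL might look like the following (the URL is Ollama's usual local endpoint and the model name is an assumption; adjust both to your setup):

```bash
# Hypothetical combined invocation: Ollama provider, explicit model,
# larger context window, and a local base URL (not a shipped default).
code-assistant -p ollama -m codellama --num-ctx 16384 \
  --base-url http://localhost:11434 \
  -t "Summarize the module structure of this project"
```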
Environment variables:
- `ANTHROPIC_API_KEY`: Required when using the Anthropic provider
- `OPENAI_API_KEY`: Required when using the OpenAI provider
- `GOOGLE_API_KEY`: Required when using the Vertex provider
- Note: AI Core authentication is configured via deployment config file
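Since these are ordinary environment variables, one way to set them is to export them in your shell before invoking the tool (the key value below is a placeholder):

```bash
# Placeholder key; substitute your real Anthropic API key.
export ANTHROPIC_API_KEY="sk-ant-..."
code-assistant --task "Explain the purpose of this codebase"
```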
Examples:
```bash
# Analyze code in current directory using Anthropic's Claude
code-assistant --task "Explain the purpose of this codebase"

# Use OpenAI to analyze a specific directory with verbose logging
code-assistant -p open-ai --path ./my-project -t "List all API endpoints" -v

# Use Google's Vertex AI with a specific model
code-assistant -p vertex --model gemini-1.5-flash -t "Analyze code complexity"

# Use Ollama with a specific model (model is required for Ollama)
code-assistant -p ollama -m codellama --task "Find all TODO comments in the codebase"

# Use AI Core provider
code-assistant -p ai-core --task "Document the public API"

# Use the working memory agent mode instead of message history mode
code-assistant --task "Find performance bottlenecks" --agent-mode working_memory

# Continue a previously interrupted task
code-assistant --continue-task

# Start with GUI interface
code-assistant --ui

# Record a session for later playback
code-assistant --task "Optimize database queries" --record ./recordings/db-optimization.json

# Play back a recorded session with fast-forward (no timing delays)
code-assistant --playback ./recordings/db-optimization.json --fast-playback
```
### Server Mode
Runs as a Model Context Protocol server:
```bash
code-assistant server [OPTIONS]
```
Available options:
- `-v, --verbose`: Enable verbose logging
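When the server is launched by Claude Desktop, the flag can be appended to the `args` array in `claude_desktop_config.json`; a sketch based on the example configuration above:

```json
{
  "mcpServers": {
    "code-assistant": {
      "command": "/Users/<username>/workspace/code-assistant/target/release/code-assistant",
      "args": [
        "server",
        "-v"
      ]
    }
  }
}
```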
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.