mirror of https://github.com/instructkr/claw-code.git
synced 2026-04-05 16:39:04 +08:00
Compare commits
2 Commits
fix/ui-par... feat/relea...
| Author | SHA1 | Date |
|---|---|---|
| | 83bbf5c7cb | |
| | 85f0e892c5 | |
48 PARITY.md
@@ -59,18 +59,15 @@ Evidence:

### Rust exists

Evidence:

- Hook config is parsed and merged in `rust/crates/runtime/src/config.rs`.
- Shell-command `PreToolUse` / `PostToolUse` hooks execute via `rust/crates/runtime/src/hooks.rs`.
- Conversation runtime runs pre/post hooks around tool execution in `rust/crates/runtime/src/conversation.rs`.
- Hook config can now be inspected through a dedicated Rust `/hooks` report in `rust/crates/commands/src/lib.rs` and `rust/crates/claw-cli/src/main.rs`.
- Hook config can be inspected via Rust config reporting in `rust/crates/commands/src/lib.rs` and `rust/crates/claw-cli/src/main.rs`.
- Prompt guidance mentions hooks in `rust/crates/runtime/src/prompt.rs`.

### Missing or broken in Rust

- No TS-style matcher-based hook config model; Rust only supports merged string command lists under `settings.hooks.PreToolUse` and `PostToolUse`.
- No TS-style prompt/agent/http hook types, `PostToolUseFailure`, `PermissionDenied`, or richer hook lifecycle surfaces.
- No TS-equivalent interactive `/hooks` browser/editor; Rust currently provides inspection/reporting only.
- No PreToolUse/PostToolUse input rewrite, MCP-output mutation, or continuation-stop behavior beyond allow/deny plus feedback text.
- No actual hook execution pipeline in `rust/crates/runtime/src/conversation.rs`.
- No PreToolUse/PostToolUse mutation/deny/rewrite/result-hook behavior.
- No Rust `/hooks` parity command.

**Status:** basic shell hook runtime plus `/hooks` inspection; richer TS hook model still missing.
**Status:** config-only; runtime behavior missing.

---
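The "merged string command lists" model that the hooks section contrasts with TS matchers can be sketched roughly as follows. This is a minimal illustration, not the actual `config.rs` API: `HookMap` and `merge_hooks` are hypothetical names, assuming only that each hook event maps to a flat list of shell commands and that config layers merge by appending.

```rust
use std::collections::BTreeMap;

// Hypothetical shape: each hook event name maps to a flat list of shell commands.
type HookMap = BTreeMap<String, Vec<String>>;

// Merge a higher-precedence layer (e.g. project settings) over a base layer by
// appending its commands per event -- a flat list model, not TS-style matchers.
fn merge_hooks(base: &HookMap, overlay: &HookMap) -> HookMap {
    let mut merged = base.clone();
    for (event, commands) in overlay {
        merged
            .entry(event.clone())
            .or_default()
            .extend(commands.iter().cloned());
    }
    merged
}

fn main() {
    let mut global = HookMap::new();
    global.insert("PreToolUse".to_string(), vec!["echo pre".to_string()]);

    let mut project = HookMap::new();
    project.insert("PreToolUse".to_string(), vec!["lint.sh".to_string()]);
    project.insert("PostToolUse".to_string(), vec!["echo post".to_string()]);

    let merged = merge_hooks(&global, &project);
    // Base commands keep their position; overlay commands append after them.
    assert_eq!(merged["PreToolUse"], vec!["echo pre", "lint.sh"]);
    assert_eq!(merged["PostToolUse"], vec!["echo post"]);
    println!("{merged:?}");
}
```

Under this model there is nothing to match against a tool name, which is exactly the gap the first "Missing" bullet describes.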
@@ -84,19 +81,16 @@ Evidence:

### Rust exists

Evidence:

- Local plugin manifests, registry/state, install/update/uninstall flows, and bundled/external discovery live in `rust/crates/plugins/src/lib.rs`.
- Runtime config parses plugin settings (`enabledPlugins`, external directories, install root, registry path, bundled root) in `rust/crates/runtime/src/config.rs`.
- CLI wiring builds a `PluginManager`, exposes `/plugin` inspection/reporting, and now exposes `/reload-plugins` runtime rebuild/reporting in `rust/crates/commands/src/lib.rs` and `rust/crates/claw-cli/src/main.rs`.
- Plugin-provided tools are merged into the runtime tool registry in `rust/crates/claw-cli/src/main.rs` and `rust/crates/tools/src/lib.rs`.
- No dedicated plugin subsystem appears under `rust/crates/`.
- Repo-wide Rust references to plugins are effectively absent beyond text/help mentions.

### Missing or broken in Rust

- No TS-style marketplace/discovery/editor UI; current surfaces are local manifest/reporting oriented.
- Plugin-defined slash commands are validated from manifests but not exposed in the CLI runtime.
- Plugin hooks and lifecycle commands are validated but not wired into the conversation runtime startup/shutdown or hook runner.
- No plugin-provided MCP/server extension path.
- `/reload-plugins` only rebuilds the current local runtime; it is not a richer TS hot-reload/plugin-browser flow.
- No plugin loader.
- No marketplace install/update/enable/disable flow.
- No `/plugin` or `/reload-plugins` parity.
- No plugin-provided hook/tool/command/MCP extension path.

**Status:** local plugin discovery/install/inspection exists; TS marketplace/runtime-extension parity is still partial.
**Status:** missing.

---
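The "plugin-provided tools are merged into the runtime tool registry" bullet can be sketched as a simple name-keyed merge. This is an illustrative sketch only -- `ToolSpec` and `merge_plugin_tools` are hypothetical names, and the collision policy (built-ins win) is an assumption, not verified against `tools/src/lib.rs`:

```rust
use std::collections::BTreeMap;

// Hypothetical minimal tool descriptor.
#[derive(Debug, Clone, PartialEq)]
struct ToolSpec {
    name: String,
    description: String,
}

// Merge plugin tools into the registry; assumed policy: built-in tools keep
// their slot on name collisions, plugins only fill gaps. Returns tools added.
fn merge_plugin_tools(
    registry: &mut BTreeMap<String, ToolSpec>,
    plugin_tools: Vec<ToolSpec>,
) -> usize {
    let mut added = 0;
    for tool in plugin_tools {
        if !registry.contains_key(&tool.name) {
            registry.insert(tool.name.clone(), tool);
            added += 1;
        }
    }
    added
}

fn main() {
    let mut registry = BTreeMap::new();
    registry.insert(
        "read_file".to_string(),
        ToolSpec { name: "read_file".to_string(), description: "builtin".to_string() },
    );

    let added = merge_plugin_tools(
        &mut registry,
        vec![
            ToolSpec { name: "read_file".to_string(), description: "plugin".to_string() },
            ToolSpec { name: "deploy".to_string(), description: "plugin".to_string() },
        ],
    );

    // Only the non-colliding plugin tool lands; the builtin is untouched.
    assert_eq!(added, 1);
    assert_eq!(registry["read_file"].description, "builtin");
    println!("{} tools registered", registry.len());
}
```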
@@ -110,18 +104,18 @@ Evidence:

### Rust exists

Evidence:

- `Skill` tool in `rust/crates/tools/src/lib.rs` now resolves workspace-local `.codex/.claw` skills plus legacy `/commands` entries through shared runtime discovery.
- `/skills` exists in `rust/crates/commands/src/lib.rs` and `rust/crates/claw-cli/src/main.rs`, listing discoverable local skills and checked skill directories in the current workspace context.
- `Skill` tool in `rust/crates/tools/src/lib.rs` resolves and reads local `SKILL.md` files.
- CLAW.md discovery is implemented in `rust/crates/runtime/src/prompt.rs`.
- Rust supports `/memory` and `/init` via `rust/crates/commands/src/lib.rs` and `rust/crates/claw-cli/src/main.rs`.

### Missing or broken in Rust

- No bundled skill registry equivalent.
- No `/skills` command.
- No MCP skill-builder pipeline.
- No TS-style live skill discovery/reload/change handling.
- No comparable session-memory / team-memory integration around skills.

**Status:** local/workspace skill loading plus minimal `/skills` discovery; bundled/MCP parity still missing.
**Status:** basic local skill loading only.

---
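Workspace-local `SKILL.md` resolution as described above can be sketched as a priority search over candidate directories. The directory names below (`.claw/skills`, `.codex/skills`, `commands`) and the `resolve_skill` helper are assumptions for illustration, not the discovery code's actual layout:

```rust
use std::path::{Path, PathBuf};

// Resolve a skill by name: first hit in priority order wins, assuming the
// convention that a skill lives at <dir>/<name>/SKILL.md.
fn resolve_skill(workspace: &Path, name: &str) -> Option<PathBuf> {
    let candidates = [".claw/skills", ".codex/skills", "commands"];
    candidates
        .iter()
        .map(|dir| workspace.join(dir).join(name).join("SKILL.md"))
        .find(|path| path.is_file())
}

fn main() {
    // Build a throwaway workspace with one skill to exercise the lookup.
    let workspace = std::env::temp_dir().join("claw-skill-demo");
    let skill_dir = workspace.join(".claw/skills/review");
    std::fs::create_dir_all(&skill_dir).expect("create skill dir");
    std::fs::write(skill_dir.join("SKILL.md"), "# review\n").expect("write skill");

    assert!(resolve_skill(&workspace, "review").is_some());
    assert!(resolve_skill(&workspace, "missing").is_none());
    println!("resolved: {:?}", resolve_skill(&workspace, "review"));
}
```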
@@ -136,14 +130,14 @@ Evidence:

### Rust exists

Evidence:

- Shared slash command registry in `rust/crates/commands/src/lib.rs`.
- Rust slash commands currently cover `help`, `status`, `compact`, `model`, `permissions`, `clear`, `cost`, `resume`, `config`, `hooks`, `memory`, `init`, `diff`, `version`, `export`, `session`, `plugin`, `reload-plugins`, `agents`, and `skills`.
- Rust slash commands currently cover `help`, `status`, `compact`, `model`, `permissions`, `clear`, `cost`, `resume`, `config`, `memory`, `init`, `diff`, `version`, `export`, `session`.
- Main CLI/repl/prompt handling lives in `rust/crates/claw-cli/src/main.rs`.

### Missing or broken in Rust

- Missing major TS command families: `/hooks`, `/mcp`, `/plan`, `/review`, `/tasks`, and many others.
- Missing major TS command families: `/agents`, `/hooks`, `/mcp`, `/plugin`, `/skills`, `/plan`, `/review`, `/tasks`, and many others.
- No Rust equivalent to TS structured IO / remote transport layers.
- No TS-style handler decomposition for auth/plugins/MCP/agents.
- JSON prompt mode now maintains clean transport output in tool-capable runs; targeted CLI coverage should guard against regressions.
- JSON prompt mode is improved on this branch, but still not clean transport parity: empirical verification shows tool-capable JSON output can emit human-readable tool-result lines before the final JSON object.

**Status:** functional local CLI core, much narrower than TS.
@@ -167,7 +161,7 @@ Evidence:

- No TS-style hook-aware orchestration layer.
- No TS structured/remote assistant transport stack.
- No richer TS assistant/session-history/background-task integration.
- JSON output path is no longer single-turn only on this branch, and tool-capable prompt output now stays transport-clean like the TypeScript behavior.
- JSON output path is no longer single-turn only on this branch, but output cleanliness still lags TS transport expectations.

**Status:** strong core loop, missing orchestration layers.
@@ -215,6 +209,6 @@ Evidence:

- **Unlimited max_iterations**
  - Verified at `rust/crates/runtime/src/conversation.rs` with `usize::MAX`.

### JSON prompt output cleanliness status
### Remaining notable parity issue

- **JSON prompt output cleanliness**
  - Verified clean in tool-capable prompt mode: stdout remains a single final JSON object when tools fire.
  - Tool-capable JSON mode now loops, but empirical verification still shows pre-JSON human-readable tool-result output when tools fire.
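The "stdout remains a single final JSON object" property claimed above can be checked even without a JSON parser. The sketch below is a deliberate simplification (it ignores braces inside string literals) assuming only that clean transport output is exactly one top-level object with nothing before or after it:

```rust
// Rough transport-cleanliness check: output must start with '{' and the
// top-level object must close exactly at the end of the (trimmed) output.
// Simplification: braces inside JSON strings are not handled.
fn is_single_json_object(stdout: &str) -> bool {
    let trimmed = stdout.trim();
    if !trimmed.starts_with('{') {
        return false;
    }
    let mut depth = 0i32;
    for (index, byte) in trimmed.bytes().enumerate() {
        match byte {
            b'{' => depth += 1,
            b'}' => {
                depth -= 1;
                // Closing the top-level object before the end means trailing junk.
                if depth == 0 {
                    return index == trimmed.len() - 1;
                }
            }
            _ => {}
        }
    }
    false
}

fn main() {
    assert!(is_single_json_object("{\"message\": \"done\"}\n"));
    // Human-readable tool rendering before the object is exactly the leak
    // the parity notes describe.
    assert!(!is_single_json_object("Read fixture.txt\n{\"message\": \"done\"}"));
    assert!(!is_single_json_object("{\"message\": \"done\"} trailing"));
    println!("checks passed");
}
```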
57 README.md
@@ -26,27 +26,6 @@

<a href="https://github.com/sponsors/instructkr"><img src="https://img.shields.io/badge/Sponsor-%E2%9D%A4-pink?logo=github&style=for-the-badge" alt="Sponsor on GitHub" /></a>
</p>

## Philosophy

If you are staring at the generated files, you are looking at the wrong layer.

The Python rewrite was a byproduct, and the Rust port is too. The real subject of this repository is the **agent coordination system** that made both possible: a human giving direction in Discord, agents decomposing the work, implementing in parallel, reviewing each other, fixing failures, and shipping without constant babysitting.

That is the point of this project.

- **the code is evidence, not the product**
- **the system that produces the code is the thing worth studying**
- **architectural clarity, task decomposition, and coordination matter more than typing speed**
- **clean-room rebuilding is valuable because it exposes process, not because it preserves an archive**
- **the future of software work is better agent orchestration, not more manual pane babysitting**

This repository exists to document and improve that loop: direction from the human, execution by the agents, notifications routed outside the context window, and repeated verification until the result is good enough to ship.

In other words: **stop staring at the files**. Study the workflow that produced them.

> [!IMPORTANT]
> **This repository is not affiliated with any coin, token, NFT, or crypto project.** It is software infrastructure work only, and any attempt to frame it as a cryptocurrency project is incorrect.

> [!IMPORTANT]
> **Rust port is now in progress** on the [`dev/rust`](https://github.com/instructkr/claw-code/tree/dev/rust) branch and is expected to be merged into main today. The Rust implementation aims to deliver a faster, memory-safe harness runtime. Stay tuned — this will be the definitive version of the project.
@@ -89,33 +68,21 @@ https://github.com/instructkr/claw-code



## Related Projects
## The Creators Featured in Wall Street Journal For Avid Claw Code Fans

This repository sits inside a broader harness-engineering stack. If you want the surrounding tooling rather than only this port, start here:
I've been deeply interested in **harness engineering** — studying how agent systems wire tools, orchestrate tasks, and manage runtime context. This isn't a sudden thing. The Wall Street Journal featured my work earlier this month, documenting how I've been one of the most active power users exploring these systems:

### oh-my-codex (OmX)
> AI startup worker Sigrid Jin, who attended the Seoul dinner, single-handedly used 25 billion of Claw Code tokens last year. At the time, usage limits were looser, allowing early enthusiasts to reach tens of billions of tokens at a very low cost.
>
> Despite his countless hours with Claw Code, Jin isn't faithful to any one AI lab. The tools available have different strengths and weaknesses, he said. Codex is better at reasoning, while Claw Code generates cleaner, more shareable code.
>
> Jin flew to San Francisco in February for Claw Code's first birthday party, where attendees waited in line to compare notes with Cherny. The crowd included a practicing cardiologist from Belgium who had built an app to help patients navigate care, and a California lawyer who made a tool for automating building permit approvals using Claw Code.
>
> "It was basically like a sharing party," Jin said. "There were lawyers, there were doctors, there were dentists. They did not have software engineering backgrounds."
>
> — *The Wall Street Journal*, March 21, 2026, [*"The Trillion Dollar Race to Automate Our Entire Lives"*](https://lnkd.in/gs9td3qd)

[](https://github.com/Yeachan-Heo/oh-my-codex)

Primary orchestration layer for planning, delegation, verification loops, and long-running execution patterns such as `$team` and `$ralph`.

### oh-my-claudecode (OmC)

[](https://github.com/Yeachan-Heo/oh-my-claudecode)

Companion workflow layer for Claude Code-centered orchestration and multi-agent terminal workflows.

### clawhip

[](https://github.com/Yeachan-Heo/clawhip)

Event-to-channel routing for commits, PRs, issues, tmux sessions, and agent lifecycle updates — keeping monitoring traffic out of the active agent context window.

### oh-my-opencode (OmO)

[](https://github.com/code-yeongyu/oh-my-openagent)

Used here for later-pass implementation acceleration and verification support alongside OmX.


---

@@ -6,31 +6,6 @@ use crossterm::event::{self, Event, KeyCode, KeyEvent, KeyEventKind, KeyModifier
use crossterm::queue;
use crossterm::terminal::{self, Clear, ClearType};

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SlashCommandDescriptor {
    pub command: String,
    pub description: Option<String>,
    pub argument_hint: Option<String>,
    pub aliases: Vec<String>,
}

impl SlashCommandDescriptor {
    #[allow(dead_code)]
    #[must_use]
    pub fn simple(command: impl Into<String>) -> Self {
        Self {
            command: command.into(),
            description: None,
            argument_hint: None,
            aliases: Vec::new(),
        }
    }

    fn triggers(&self) -> impl Iterator<Item = &str> {
        std::iter::once(self.command.as_str()).chain(self.aliases.iter().map(String::as_str))
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ReadOutcome {
    Submit(String),
@@ -203,21 +178,14 @@ impl EditSession {
        out: &mut impl Write,
        base_prompt: &str,
        vim_enabled: bool,
        assist_lines: &[String],
    ) -> io::Result<()> {
        self.clear_render(out)?;

        let prompt = self.prompt(base_prompt, vim_enabled);
        let buffer = self.visible_buffer();
        write!(out, "{prompt}{buffer}")?;
        if !assist_lines.is_empty() {
            for line in assist_lines {
                write!(out, "\r\n{line}")?;
            }
        }

        let (cursor_row, cursor_col, total_lines) =
            self.cursor_layout(prompt.as_ref(), assist_lines.len());
        let (cursor_row, cursor_col, total_lines) = self.cursor_layout(prompt.as_ref());
        let rows_to_move_up = total_lines.saturating_sub(cursor_row + 1);
        if rows_to_move_up > 0 {
            queue!(out, MoveUp(to_u16(rows_to_move_up)?))?;
@@ -243,7 +211,7 @@ impl EditSession {
        writeln!(out)
    }

    fn cursor_layout(&self, prompt: &str, assist_line_count: usize) -> (usize, usize, usize) {
    fn cursor_layout(&self, prompt: &str) -> (usize, usize, usize) {
        let active_text = self.active_text();
        let cursor = if self.mode == EditorMode::Command {
            self.command_cursor
@@ -257,8 +225,7 @@ impl EditSession {
            Some((_, suffix)) => suffix.chars().count(),
            None => prompt.chars().count() + cursor_prefix.chars().count(),
        };
        let total_lines =
            active_text.bytes().filter(|byte| *byte == b'\n').count() + 1 + assist_line_count;
        let total_lines = active_text.bytes().filter(|byte| *byte == b'\n').count() + 1;
        (cursor_row, cursor_col, total_lines)
    }
}
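The `total_lines` change in `cursor_layout` above reduces to counting newlines in the active buffer, with the new branch adding the assist-line count on top. A standalone sketch mirroring both variants:

```rust
// New-branch behavior: rendered height is buffer lines plus assist lines.
// Passing 0 for assist_line_count reproduces the old computation.
fn total_lines(active_text: &str, assist_line_count: usize) -> usize {
    active_text.bytes().filter(|byte| *byte == b'\n').count() + 1 + assist_line_count
}

fn main() {
    // Old behavior: no assist lines.
    assert_eq!(total_lines("single line", 0), 1);
    assert_eq!(total_lines("two\nlines", 0), 2);
    // New behavior on this branch: assist suggestions extend the render height,
    // which is why MoveUp must account for them when repositioning the cursor.
    assert_eq!(total_lines("two\nlines", 3), 5);
    println!("ok");
}
```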
@@ -273,43 +240,21 @@ enum KeyAction {

pub struct LineEditor {
    prompt: String,
    slash_commands: Vec<SlashCommandDescriptor>,
    completions: Vec<String>,
    history: Vec<String>,
    yank_buffer: YankBuffer,
    vim_enabled: bool,
    completion_state: Option<CompletionState>,
}

#[derive(Debug, Clone, PartialEq, Eq)]
struct CompletionState {
    prefix: String,
    matches: Vec<String>,
    next_index: usize,
}

impl LineEditor {
    #[allow(dead_code)]
    #[must_use]
    pub fn new(prompt: impl Into<String>, completions: Vec<String>) -> Self {
        let slash_commands = completions
            .into_iter()
            .map(SlashCommandDescriptor::simple)
            .collect();
        Self::with_slash_commands(prompt, slash_commands)
    }

    #[must_use]
    pub fn with_slash_commands(
        prompt: impl Into<String>,
        slash_commands: Vec<SlashCommandDescriptor>,
    ) -> Self {
        Self {
            prompt: prompt.into(),
            slash_commands,
            completions,
            history: Vec::new(),
            yank_buffer: YankBuffer::default(),
            vim_enabled: false,
            completion_state: None,
        }
    }
@@ -330,12 +275,7 @@ impl LineEditor {
        let _raw_mode = RawModeGuard::new()?;
        let mut stdout = io::stdout();
        let mut session = EditSession::new(self.vim_enabled);
        session.render(
            &mut stdout,
            &self.prompt,
            self.vim_enabled,
            &self.command_assist_lines(&session),
        )?;
        session.render(&mut stdout, &self.prompt, self.vim_enabled)?;

        loop {
            let Event::Key(key) = event::read()? else {
@@ -347,12 +287,7 @@ impl LineEditor {

            match self.handle_key_event(&mut session, key) {
                KeyAction::Continue => {
                    session.render(
                        &mut stdout,
                        &self.prompt,
                        self.vim_enabled,
                        &self.command_assist_lines(&session),
                    )?;
                    session.render(&mut stdout, &self.prompt, self.vim_enabled)?;
                }
                KeyAction::Submit(line) => {
                    session.finalize_render(&mut stdout, &self.prompt, self.vim_enabled)?;
@@ -381,12 +316,7 @@ impl LineEditor {
            }
            )?;
            session = EditSession::new(self.vim_enabled);
            session.render(
                &mut stdout,
                &self.prompt,
                self.vim_enabled,
                &self.command_assist_lines(&session),
            )?;
            session.render(&mut stdout, &self.prompt, self.vim_enabled)?;
        }
    }
}
@@ -427,10 +357,6 @@ impl LineEditor {
    }

    fn handle_key_event(&mut self, session: &mut EditSession, key: KeyEvent) -> KeyAction {
        if key.code != KeyCode::Tab {
            self.completion_state = None;
        }

        if key.modifiers.contains(KeyModifiers::CONTROL) {
            match key.code {
                KeyCode::Char('c') | KeyCode::Char('C') => {
@@ -747,162 +673,23 @@ impl LineEditor {
        session.cursor = insert_at + self.yank_buffer.text.len();
    }

    fn complete_slash_command(&mut self, session: &mut EditSession) {
    fn complete_slash_command(&self, session: &mut EditSession) {
        if session.mode == EditorMode::Command {
            self.completion_state = None;
            return;
        }
        if let Some(state) = self
            .completion_state
            .as_mut()
            .filter(|_| session.cursor == session.text.len())
            .filter(|state| {
                state
                    .matches
                    .iter()
                    .any(|candidate| session.text == *candidate || session.text == format!("{candidate} "))
            })
        {
            let candidate = state.matches[state.next_index % state.matches.len()].clone();
            state.next_index += 1;
            let replacement = completed_command(&candidate);
            session.text.replace_range(..session.cursor, &replacement);
            session.cursor = replacement.len();
            return;
        }
        let Some(prefix) = slash_command_prefix(&session.text, session.cursor) else {
            self.completion_state = None;
            return;
        };
        let matches = self.matching_commands(prefix);
        if matches.is_empty() {
            self.completion_state = None;
            return;
        }

        let candidate = if let Some(state) = self
            .completion_state
            .as_mut()
            .filter(|state| state.prefix == prefix && state.matches == matches)
        {
            let index = state.next_index % state.matches.len();
            state.next_index += 1;
            state.matches[index].clone()
        } else {
            let candidate = matches[0].clone();
            self.completion_state = Some(CompletionState {
                prefix: prefix.to_string(),
                matches,
                next_index: 1,
            });
            candidate
        };

        let replacement = completed_command(&candidate);
        session.text.replace_range(..session.cursor, &replacement);
        session.cursor = replacement.len();
    }
    fn matching_commands(&self, prefix: &str) -> Vec<String> {
        let normalized = prefix.to_ascii_lowercase();
        let mut ranked = self
            .slash_commands
        let Some(candidate) = self
            .completions
            .iter()
            .filter_map(|descriptor| {
                let command = descriptor.command.clone();
                let mut best_rank = None::<(u8, usize)>;
                for trigger in descriptor.triggers() {
                    let trigger_lower = trigger.to_ascii_lowercase();
                    let rank = if trigger_lower == normalized {
                        if trigger == descriptor.command {
                            Some((0, trigger.len()))
                        } else {
                            Some((1, trigger.len()))
                        }
                    } else if trigger_lower.starts_with(&normalized) {
                        if trigger == descriptor.command {
                            Some((2, trigger.len()))
                        } else {
                            Some((3, trigger.len()))
                        }
                    } else if trigger_lower.contains(&normalized) {
                        Some((4, trigger.len()))
                    } else {
                        None
                    };
                    if let Some(rank) = rank {
                        best_rank = Some(best_rank.map_or(rank, |current| current.min(rank)));
                    }
                }
                best_rank.map(|(bucket, len)| (bucket, len, command))
            })
            .collect::<Vec<_>>();
            .find(|candidate| candidate.starts_with(prefix) && candidate.as_str() != prefix)
        else {
            return;
        };

        ranked.sort_by(|left, right| left.cmp(right));
        ranked.dedup_by(|left, right| left.2 == right.2);
        ranked.into_iter().map(|(_, _, command)| command).collect()
    }
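The bucketed ranking in `matching_commands` (exact match before prefix match before substring match, canonical triggers before aliases, shorter triggers before longer ones within a bucket) can be sketched standalone. The `rank` helper below is illustrative, not the editor's actual code, and only covers the canonical-trigger buckets:

```rust
// Rank a trigger against typed input; lower bucket sorts first.
// Buckets mirror the diff: 0 exact, 2 prefix, 4 substring (the odd buckets
// in the real code are the alias variants of each).
fn rank(trigger: &str, typed: &str) -> Option<(u8, usize)> {
    let trigger_lower = trigger.to_ascii_lowercase();
    let typed_lower = typed.to_ascii_lowercase();
    if trigger_lower == typed_lower {
        Some((0, trigger.len()))
    } else if trigger_lower.starts_with(&typed_lower) {
        Some((2, trigger.len()))
    } else if trigger_lower.contains(&typed_lower) {
        Some((4, trigger.len()))
    } else {
        None
    }
}

fn main() {
    let mut ranked: Vec<(u8, usize, &str)> = ["/plugin", "/permissions", "/help"]
        .iter()
        .filter_map(|command| rank(command, "/p").map(|(bucket, len)| (bucket, len, *command)))
        .collect();
    // Tuple ordering gives bucket-then-length ordering for free.
    ranked.sort();
    let ordered: Vec<&str> = ranked.into_iter().map(|(_, _, command)| command).collect();
    // Both are prefix matches; the shorter trigger wins the first slot, which
    // is why the cycling test below sees "/plugin " before "/permissions ".
    assert_eq!(ordered, vec!["/plugin", "/permissions"]);
    println!("{ordered:?}");
}
```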
    fn command_assist_lines(&self, session: &EditSession) -> Vec<String> {
        if session.mode == EditorMode::Command || session.cursor != session.text.len() {
            return Vec::new();
        }

        let input = session.text.as_str();
        if !input.starts_with('/') {
            return Vec::new();
        }

        if let Some((command, args)) = command_and_args(input) {
            if input.ends_with(' ') && args.is_empty() {
                if let Some(descriptor) = self.find_command_descriptor(command) {
                    let mut lines = Vec::new();
                    if let Some(argument_hint) = &descriptor.argument_hint {
                        lines.push(dimmed_line(format!("Arguments: {argument_hint}")));
                    }
                    if let Some(description) = &descriptor.description {
                        lines.push(dimmed_line(description));
                    }
                    if !lines.is_empty() {
                        return lines;
                    }
                }
            }
        }

        if input.contains(char::is_whitespace) {
            return Vec::new();
        }

        let matches = self.matching_commands(input);
        if matches.is_empty() {
            return Vec::new();
        }

        let mut lines = vec![dimmed_line("Suggestions")];
        lines.extend(matches.into_iter().take(3).map(|command| {
            let description = self
                .find_command_descriptor(command.trim_start_matches('/'))
                .and_then(|descriptor| descriptor.description.as_deref())
                .unwrap_or_default();
            if description.is_empty() {
                dimmed_line(format!(" {command}"))
            } else {
                dimmed_line(format!(" {command:<18} {description}"))
            }
        }));
        lines
    }

    fn find_command_descriptor(&self, name: &str) -> Option<&SlashCommandDescriptor> {
        let normalized = name.trim().trim_start_matches('/').to_ascii_lowercase();
        self.slash_commands.iter().find(|descriptor| {
            descriptor.command.trim_start_matches('/').eq_ignore_ascii_case(&normalized)
                || descriptor
                    .aliases
                    .iter()
                    .any(|alias| alias.trim_start_matches('/').eq_ignore_ascii_case(&normalized))
        })
        session.text.replace_range(..session.cursor, candidate);
        session.cursor = candidate.len();
    }

    fn history_up(&self, session: &mut EditSession) {
@@ -1124,27 +911,6 @@ fn slash_command_prefix(line: &str, pos: usize) -> Option<&str> {
    Some(prefix)
}

fn command_and_args(input: &str) -> Option<(&str, &str)> {
    let trimmed = input.trim_start();
    let without_slash = trimmed.strip_prefix('/')?;
    let (command, args) = without_slash
        .split_once(' ')
        .map_or((without_slash, ""), |(command, args)| (command, args));
    Some((command, args))
}

fn completed_command(command: &str) -> String {
    if command.ends_with(' ') {
        command.to_string()
    } else {
        format!("{command} ")
    }
}

fn dimmed_line(text: impl AsRef<str>) -> String {
    format!("\x1b[2m{}\x1b[0m", text.as_ref())
}
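The helpers above encode two small conventions worth seeing in action: `command_and_args` splits a slash input into its command and argument halves, and `completed_command` guarantees exactly one trailing space so argument typing can continue immediately after a Tab completion. This standalone check mirrors both (the functions are copied from the diff above):

```rust
fn command_and_args(input: &str) -> Option<(&str, &str)> {
    let trimmed = input.trim_start();
    let without_slash = trimmed.strip_prefix('/')?;
    let (command, args) = without_slash
        .split_once(' ')
        .map_or((without_slash, ""), |(command, args)| (command, args));
    Some((command, args))
}

fn completed_command(command: &str) -> String {
    if command.ends_with(' ') {
        command.to_string()
    } else {
        format!("{command} ")
    }
}

fn main() {
    // Non-slash input is rejected; slash input splits at the first space.
    assert_eq!(command_and_args("hello"), None);
    assert_eq!(command_and_args("/model opus"), Some(("model", "opus")));
    assert_eq!(command_and_args("/help"), Some(("help", "")));

    // Completion appends a single trailing space, idempotently.
    assert_eq!(completed_command("/help"), "/help ");
    assert_eq!(completed_command("/help "), "/help ");
    println!("ok");
}
```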
fn to_u16(value: usize) -> io::Result<u16> {
    u16::try_from(value).map_err(|_| {
        io::Error::new(
@@ -1158,7 +924,6 @@ fn to_u16(value: usize) -> io::Result<u16> {
mod tests {
    use super::{
        selection_bounds, slash_command_prefix, EditSession, EditorMode, KeyAction, LineEditor,
        SlashCommandDescriptor,
    };
    use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};
@@ -1321,7 +1086,7 @@ mod tests {
    #[test]
    fn tab_completes_matching_slash_commands() {
        // given
        let mut editor = LineEditor::new("> ", vec!["/help".to_string(), "/hello".to_string()]);
        let editor = LineEditor::new("> ", vec!["/help".to_string(), "/hello".to_string()]);
        let mut session = EditSession::new(false);
        session.text = "/he".to_string();
        session.cursor = session.text.len();
@@ -1330,88 +1095,8 @@ mod tests {
        editor.complete_slash_command(&mut session);

        // then
        assert_eq!(session.text, "/help ");
        assert_eq!(session.cursor, 6);
    }

    #[test]
    fn tab_cycles_between_matching_slash_commands() {
        // given
        let mut editor = LineEditor::new(
            "> ",
            vec!["/permissions".to_string(), "/plugin".to_string()],
        );
        let mut session = EditSession::new(false);
        session.text = "/p".to_string();
        session.cursor = session.text.len();

        // when
        editor.complete_slash_command(&mut session);
        let first = session.text.clone();
        session.cursor = session.text.len();
        editor.complete_slash_command(&mut session);
        let second = session.text.clone();

        // then
        assert_eq!(first, "/plugin ");
        assert_eq!(second, "/permissions ");
    }

    #[test]
    fn tab_completion_prefers_canonical_command_over_alias() {
        let mut editor = LineEditor::with_slash_commands(
            "> ",
            vec![SlashCommandDescriptor {
                command: "/plugin".to_string(),
                description: Some("Manage plugins".to_string()),
                argument_hint: Some("[list]".to_string()),
                aliases: vec!["/plugins".to_string(), "/marketplace".to_string()],
            }],
        );
        let mut session = EditSession::new(false);
        session.text = "/plugins".to_string();
        session.cursor = session.text.len();

        editor.complete_slash_command(&mut session);

        assert_eq!(session.text, "/plugin ");
    }

    #[test]
    fn command_assist_lines_show_suggestions_and_argument_hints() {
        let editor = LineEditor::with_slash_commands(
            "> ",
            vec![
                SlashCommandDescriptor {
                    command: "/help".to_string(),
                    description: Some("Show help and available commands".to_string()),
                    argument_hint: None,
                    aliases: Vec::new(),
                },
                SlashCommandDescriptor {
                    command: "/model".to_string(),
                    description: Some("Show or switch the active model".to_string()),
                    argument_hint: Some("[model]".to_string()),
                    aliases: Vec::new(),
                },
            ],
        );

        let mut prefix_session = EditSession::new(false);
        prefix_session.text = "/h".to_string();
        prefix_session.cursor = prefix_session.text.len();
        let prefix_lines = editor.command_assist_lines(&prefix_session);
        assert!(prefix_lines.iter().any(|line| line.contains("Suggestions")));
        assert!(prefix_lines.iter().any(|line| line.contains("/help")));

        let mut hint_session = EditSession::new(false);
        hint_session.text = "/model ".to_string();
        hint_session.cursor = hint_session.text.len();
        let hint_lines = editor.command_assist_lines(&hint_session);
        assert!(hint_lines.iter().any(|line| line.contains("Arguments: [model]")));
        assert!(hint_lines
            .iter()
            .any(|line| line.contains("Show or switch the active model")));
        assert_eq!(session.text, "/help");
        assert_eq!(session.cursor, 5);
    }

    #[test]
File diff suppressed because it is too large
@@ -1,299 +0,0 @@
use std::fs;
use std::io::{Read, Write};
use std::net::TcpListener;
use std::path::PathBuf;
use std::process::Command;
use std::thread;
use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};

use serde_json::{json, Value};

#[test]
fn prompt_json_with_tool_use_writes_clean_transport_output() {
    let fixture_root = unique_temp_dir("claw-json-transport");
    fs::create_dir_all(&fixture_root).expect("create fixture root");
    fs::write(fixture_root.join("fixture.txt"), "fixture contents\n").expect("write fixture file");
    fs::create_dir_all(fixture_root.join("config")).expect("create config dir");

    let server = TestServer::spawn(vec![
        sse_response(
            "req_tool",
            &tool_use_stream("read_file", json!({ "path": "fixture.txt" })),
        ),
        sse_response("req_done", &text_stream("done")),
    ]);

    let output = Command::new(env!("CARGO_BIN_EXE_claw"))
        .current_dir(&fixture_root)
        .env("ANTHROPIC_BASE_URL", server.base_url())
        .env("ANTHROPIC_API_KEY", "test-key")
        .env("CLAW_CONFIG_HOME", fixture_root.join("config"))
        .arg("--output-format")
        .arg("json")
        .arg("prompt")
        .arg("use a tool")
        .output()
        .expect("run claw prompt json");

    server.finish();

    let stdout = String::from_utf8(output.stdout).expect("stdout should be utf8");
    let stderr = String::from_utf8(output.stderr).expect("stderr should be utf8");

    assert!(
        output.status.success(),
        "status: {:?}\nstderr:\n{stderr}",
        output.status
    );
    assert!(stderr.trim().is_empty(), "unexpected stderr: {stderr}");
    assert!(
        stdout.trim_start().starts_with('{'),
        "stdout should begin with JSON object, got:\n{stdout}"
    );

    let parsed: Value = serde_json::from_str(stdout.trim())
        .expect("full stdout should be a single parseable JSON object");

    assert_eq!(parsed["message"], "done");
    assert_eq!(parsed["iterations"], 2);
    assert_eq!(parsed["tool_uses"].as_array().map(Vec::len), Some(1));
    assert_eq!(parsed["tool_results"].as_array().map(Vec::len), Some(1));
    assert_eq!(parsed["tool_uses"][0]["name"], "read_file");
    assert_eq!(parsed["tool_results"][0]["tool_name"], "read_file");
    assert_eq!(parsed["tool_results"][0]["is_error"], false);

    let tool_output = parsed["tool_results"][0]["output"]
        .as_str()
        .expect("tool result output string");
    assert!(tool_output.contains("fixture contents"));
    assert!(
        !stdout.contains("📄 Read"),
        "stdout leaked human-readable tool rendering:\n{stdout}"
    );
}

struct TestServer {
    base_url: String,
    join_handle: thread::JoinHandle<()>,
}

impl TestServer {
    fn spawn(responses: Vec<String>) -> Self {
        let listener = TcpListener::bind("127.0.0.1:0").expect("bind listener");
        listener
            .set_nonblocking(true)
            .expect("set nonblocking listener");
        let address = listener.local_addr().expect("listener addr");
        let join_handle = thread::spawn(move || {
            let deadline = Instant::now() + Duration::from_secs(10);
            let mut served = 0usize;

            while served < responses.len() && Instant::now() < deadline {
                match listener.accept() {
                    Ok((mut stream, _)) => {
                        drain_http_request(&mut stream);
                        stream
                            .write_all(responses[served].as_bytes())
                            .expect("write response");
                        served += 1;
                    }
                    Err(error) if error.kind() == std::io::ErrorKind::WouldBlock => {
                        thread::sleep(Duration::from_millis(10));
                    }
                    Err(error) => panic!("accept failed: {error}"),
                }
            }

            assert_eq!(
                served,
                responses.len(),
                "server did not observe expected request count"
            );
        });

        Self {
            base_url: format!("http://{address}"),
            join_handle,
        }
    }

    fn base_url(&self) -> &str {
        &self.base_url
    }

    fn finish(self) {
        self.join_handle.join().expect("join server thread");
    }
}

fn drain_http_request(stream: &mut std::net::TcpStream) {
    stream
        .set_read_timeout(Some(Duration::from_secs(5)))
        .expect("set read timeout");
    let mut buffer = Vec::new();
    let mut header_end = None;

    while header_end.is_none() {
        let mut chunk = [0_u8; 1024];
        let read = stream.read(&mut chunk).expect("read request chunk");
|
||||
if read == 0 {
|
||||
break;
|
||||
}
|
||||
buffer.extend_from_slice(&chunk[..read]);
|
||||
header_end = find_header_end(&buffer);
|
||||
}
|
||||
|
||||
let header_end = header_end.expect("request should contain headers");
|
||||
let headers = String::from_utf8(buffer[..header_end].to_vec()).expect("header utf8");
|
||||
let content_length = headers
|
||||
.lines()
|
||||
.find_map(|line| {
|
||||
line.split_once(':').and_then(|(name, value)| {
|
||||
name.eq_ignore_ascii_case("content-length")
|
||||
.then(|| value.trim().parse::<usize>().expect("content length"))
|
||||
})
|
||||
})
|
||||
.unwrap_or(0);
|
||||
let mut body = buffer[(header_end + 4)..].to_vec();
|
||||
while body.len() < content_length {
|
||||
let mut chunk = vec![0_u8; content_length - body.len()];
|
||||
let read = stream.read(&mut chunk).expect("read request body");
|
||||
if read == 0 {
|
||||
break;
|
||||
}
|
||||
body.extend_from_slice(&chunk[..read]);
|
||||
}
|
||||
}
|
||||
|
||||
fn find_header_end(buffer: &[u8]) -> Option<usize> {
|
||||
buffer.windows(4).position(|window| window == b"\r\n\r\n")
|
||||
}
|
||||
|
||||
fn sse_response(request_id: &str, body: &str) -> String {
|
||||
format!(
|
||||
"HTTP/1.1 200 OK\r\nContent-Type: text/event-stream\r\nrequest-id: {request_id}\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{body}",
|
||||
body.len()
|
||||
)
|
||||
}
|
||||
|
||||
fn tool_use_stream(tool_name: &str, input: Value) -> String {
|
||||
let mut body = String::new();
|
||||
body.push_str(&sse_event(
|
||||
"message_start",
|
||||
json!({
|
||||
"type": "message_start",
|
||||
"message": {
|
||||
"id": "msg_tool",
|
||||
"type": "message",
|
||||
"role": "assistant",
|
||||
"content": [],
|
||||
"model": "claude-opus-4-6",
|
||||
"stop_reason": null,
|
||||
"stop_sequence": null,
|
||||
"usage": {"input_tokens": 8, "output_tokens": 0}
|
||||
}
|
||||
}),
|
||||
));
|
||||
body.push_str(&sse_event(
|
||||
"content_block_start",
|
||||
json!({
|
||||
"type": "content_block_start",
|
||||
"index": 0,
|
||||
"content_block": {
|
||||
"type": "tool_use",
|
||||
"id": "toolu_1",
|
||||
"name": tool_name,
|
||||
"input": {}
|
||||
}
|
||||
}),
|
||||
));
|
||||
body.push_str(&sse_event(
|
||||
"content_block_delta",
|
||||
json!({
|
||||
"type": "content_block_delta",
|
||||
"index": 0,
|
||||
"delta": {
|
||||
"type": "input_json_delta",
|
||||
"partial_json": input.to_string()
|
||||
}
|
||||
}),
|
||||
));
|
||||
body.push_str(&sse_event(
|
||||
"content_block_stop",
|
||||
json!({"type": "content_block_stop", "index": 0}),
|
||||
));
|
||||
body.push_str(&sse_event(
|
||||
"message_delta",
|
||||
json!({
|
||||
"type": "message_delta",
|
||||
"delta": {"stop_reason": "tool_use", "stop_sequence": null},
|
||||
"usage": {"input_tokens": 8, "output_tokens": 1}
|
||||
}),
|
||||
));
|
||||
body.push_str(&sse_event("message_stop", json!({"type": "message_stop"})));
|
||||
body.push_str("data: [DONE]\n\n");
|
||||
body
|
||||
}
|
||||
|
||||
fn text_stream(text: &str) -> String {
|
||||
let mut body = String::new();
|
||||
body.push_str(&sse_event(
|
||||
"message_start",
|
||||
json!({
|
||||
"type": "message_start",
|
||||
"message": {
|
||||
"id": "msg_done",
|
||||
"type": "message",
|
||||
"role": "assistant",
|
||||
"content": [],
|
||||
"model": "claude-opus-4-6",
|
||||
"stop_reason": null,
|
||||
"stop_sequence": null,
|
||||
"usage": {"input_tokens": 20, "output_tokens": 0}
|
||||
}
|
||||
}),
|
||||
));
|
||||
body.push_str(&sse_event(
|
||||
"content_block_start",
|
||||
json!({
|
||||
"type": "content_block_start",
|
||||
"index": 0,
|
||||
"content_block": {"type": "text", "text": ""}
|
||||
}),
|
||||
));
|
||||
body.push_str(&sse_event(
|
||||
"content_block_delta",
|
||||
json!({
|
||||
"type": "content_block_delta",
|
||||
"index": 0,
|
||||
"delta": {"type": "text_delta", "text": text}
|
||||
}),
|
||||
));
|
||||
body.push_str(&sse_event(
|
||||
"content_block_stop",
|
||||
json!({"type": "content_block_stop", "index": 0}),
|
||||
));
|
||||
body.push_str(&sse_event(
|
||||
"message_delta",
|
||||
json!({
|
||||
"type": "message_delta",
|
||||
"delta": {"stop_reason": "end_turn", "stop_sequence": null},
|
||||
"usage": {"input_tokens": 20, "output_tokens": 2}
|
||||
}),
|
||||
));
|
||||
body.push_str(&sse_event("message_stop", json!({"type": "message_stop"})));
|
||||
body.push_str("data: [DONE]\n\n");
|
||||
body
|
||||
}
|
||||
|
||||
fn sse_event(event_name: &str, payload: Value) -> String {
|
||||
format!("event: {event_name}\ndata: {payload}\n\n")
|
||||
}
|
||||
|
||||
fn unique_temp_dir(prefix: &str) -> PathBuf {
|
||||
let nanos = SystemTime::now()
|
||||
.duration_since(UNIX_EPOCH)
|
||||
.expect("clock should be after epoch")
|
||||
.as_nanos();
|
||||
std::env::temp_dir().join(format!("{prefix}-{nanos}"))
|
||||
}
|
||||
File diff suppressed because it is too large
@@ -648,17 +648,6 @@ pub struct PluginSummary {
    pub enabled: bool,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct PluginInspection {
    pub install_root: PathBuf,
    pub registry_path: PathBuf,
    pub settings_path: PathBuf,
    pub bundled_root: PathBuf,
    pub external_dirs: Vec<PathBuf>,
    pub discoverable_plugins: Vec<PluginSummary>,
    pub installed_plugins: Vec<PluginSummary>,
}

#[derive(Debug, Clone, Default, PartialEq)]
pub struct PluginRegistry {
    plugins: Vec<RegisteredPlugin>,

@@ -945,31 +934,6 @@ impl PluginManager {
        self.config.config_home.join(SETTINGS_FILE_NAME)
    }

    #[must_use]
    pub fn bundled_root_path(&self) -> PathBuf {
        self.config
            .bundled_root
            .clone()
            .unwrap_or_else(Self::bundled_root)
    }

    #[must_use]
    pub fn external_dirs(&self) -> &[PathBuf] {
        &self.config.external_dirs
    }

    pub fn inspect(&self) -> Result<PluginInspection, PluginError> {
        Ok(PluginInspection {
            install_root: self.install_root(),
            registry_path: self.registry_path(),
            settings_path: self.settings_path(),
            bundled_root: self.bundled_root_path(),
            external_dirs: self.external_dirs().to_vec(),
            discoverable_plugins: self.list_plugins()?,
            installed_plugins: self.list_installed_plugins()?,
        })
    }

    pub fn plugin_registry(&self) -> Result<PluginRegistry, PluginError> {
        Ok(PluginRegistry::new(
            self.discover_plugins()?

@@ -15,9 +15,12 @@ mod prompt;
mod remote;
pub mod sandbox;
mod session;
mod skills;
mod usage;

pub use lsp::{
    FileDiagnostics, LspContextEnrichment, LspError, LspManager, LspServerConfig,
    SymbolLocation, WorkspaceDiagnostics,
};
pub use bash::{execute_bash, BashCommandInput, BashCommandOutput};
pub use bootstrap::{BootstrapPhase, BootstrapPlan};
pub use compact::{
@@ -25,8 +28,8 @@ pub use compact::{
    get_compact_continuation_message, should_compact, CompactionConfig, CompactionResult,
};
pub use config::{
    ConfigEntry, ConfigError, ConfigLoader, ConfigSource, McpConfigCollection,
    McpManagedProxyServerConfig, McpOAuthConfig, McpRemoteServerConfig, McpSdkServerConfig,
    ConfigEntry, ConfigError, ConfigLoader, ConfigSource, McpManagedProxyServerConfig,
    McpConfigCollection, McpOAuthConfig, McpRemoteServerConfig, McpSdkServerConfig,
    McpServerConfig, McpStdioServerConfig, McpTransport, McpWebSocketServerConfig, OAuthConfig,
    ResolvedPermissionMode, RuntimeConfig, RuntimeFeatureConfig, RuntimeHookConfig,
    RuntimePluginConfig, ScopedMcpServerConfig, CLAW_SETTINGS_SCHEMA_NAME,
@@ -41,16 +44,12 @@ pub use file_ops::{
    WriteFileOutput,
};
pub use hooks::{HookEvent, HookRunResult, HookRunner};
pub use lsp::{
    FileDiagnostics, LspContextEnrichment, LspError, LspManager, LspServerConfig, SymbolLocation,
    WorkspaceDiagnostics,
};
pub use mcp::{
    mcp_server_signature, mcp_tool_name, mcp_tool_prefix, normalize_name_for_mcp,
    scoped_mcp_config_hash, unwrap_ccr_proxy_url,
};
pub use mcp_client::{
    McpClientAuth, McpClientBootstrap, McpClientTransport, McpManagedProxyTransport,
    McpManagedProxyTransport, McpClientAuth, McpClientBootstrap, McpClientTransport,
    McpRemoteTransport, McpSdkTransport, McpStdioTransport,
};
pub use mcp_stdio::{
@@ -82,10 +81,6 @@ pub use remote::{
    DEFAULT_SESSION_TOKEN_PATH, DEFAULT_SYSTEM_CA_BUNDLE, NO_PROXY_HOSTS, UPSTREAM_PROXY_ENV_KEYS,
};
pub use session::{ContentBlock, ConversationMessage, MessageRole, Session, SessionError};
pub use skills::{
    discover_skill_roots, resolve_skill_path, SkillDiscoveryRoot, SkillDiscoverySource,
    SkillRootKind,
};
pub use usage::{
    format_usd, pricing_for_model, ModelPricing, TokenUsage, UsageCostEstimate, UsageTracker,
};

@@ -1,313 +0,0 @@
use std::env;
use std::path::{Path, PathBuf};

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum SkillDiscoverySource {
    ProjectCodex,
    ProjectClaw,
    UserCodexHome,
    UserCodex,
    UserClaw,
}

impl SkillDiscoverySource {
    #[must_use]
    pub const fn label(self) -> &'static str {
        match self {
            Self::ProjectCodex => "Project (.codex)",
            Self::ProjectClaw => "Project (.claw)",
            Self::UserCodexHome => "User ($CODEX_HOME)",
            Self::UserCodex => "User (~/.codex)",
            Self::UserClaw => "User (~/.claw)",
        }
    }
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SkillRootKind {
    SkillsDir,
    LegacyCommandsDir,
}

impl SkillRootKind {
    #[must_use]
    pub const fn detail_label(self) -> Option<&'static str> {
        match self {
            Self::SkillsDir => None,
            Self::LegacyCommandsDir => Some("legacy /commands"),
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SkillDiscoveryRoot {
    pub source: SkillDiscoverySource,
    pub path: PathBuf,
    pub kind: SkillRootKind,
}

pub fn discover_skill_roots(cwd: &Path) -> Vec<SkillDiscoveryRoot> {
    let mut roots = Vec::new();

    for ancestor in cwd.ancestors() {
        push_unique_skill_root(
            &mut roots,
            SkillDiscoverySource::ProjectCodex,
            ancestor.join(".codex").join("skills"),
            SkillRootKind::SkillsDir,
        );
        push_unique_skill_root(
            &mut roots,
            SkillDiscoverySource::ProjectClaw,
            ancestor.join(".claw").join("skills"),
            SkillRootKind::SkillsDir,
        );
        push_unique_skill_root(
            &mut roots,
            SkillDiscoverySource::ProjectCodex,
            ancestor.join(".codex").join("commands"),
            SkillRootKind::LegacyCommandsDir,
        );
        push_unique_skill_root(
            &mut roots,
            SkillDiscoverySource::ProjectClaw,
            ancestor.join(".claw").join("commands"),
            SkillRootKind::LegacyCommandsDir,
        );
    }

    if let Ok(codex_home) = env::var("CODEX_HOME") {
        let codex_home = PathBuf::from(codex_home);
        push_unique_skill_root(
            &mut roots,
            SkillDiscoverySource::UserCodexHome,
            codex_home.join("skills"),
            SkillRootKind::SkillsDir,
        );
        push_unique_skill_root(
            &mut roots,
            SkillDiscoverySource::UserCodexHome,
            codex_home.join("commands"),
            SkillRootKind::LegacyCommandsDir,
        );
    }

    if let Some(home) = env::var_os("HOME") {
        let home = PathBuf::from(home);
        push_unique_skill_root(
            &mut roots,
            SkillDiscoverySource::UserCodex,
            home.join(".codex").join("skills"),
            SkillRootKind::SkillsDir,
        );
        push_unique_skill_root(
            &mut roots,
            SkillDiscoverySource::UserCodex,
            home.join(".codex").join("commands"),
            SkillRootKind::LegacyCommandsDir,
        );
        push_unique_skill_root(
            &mut roots,
            SkillDiscoverySource::UserClaw,
            home.join(".claw").join("skills"),
            SkillRootKind::SkillsDir,
        );
        push_unique_skill_root(
            &mut roots,
            SkillDiscoverySource::UserClaw,
            home.join(".claw").join("commands"),
            SkillRootKind::LegacyCommandsDir,
        );
    }

    roots
}

pub fn resolve_skill_path(skill: &str, cwd: &Path) -> Result<PathBuf, String> {
    let requested = normalize_requested_skill_name(skill)?;

    for root in discover_skill_roots(cwd) {
        match root.kind {
            SkillRootKind::SkillsDir => {
                let direct = root.path.join(&requested).join("SKILL.md");
                if direct.is_file() {
                    return Ok(direct);
                }

                if let Ok(entries) = std::fs::read_dir(&root.path) {
                    for entry in entries.flatten() {
                        let path = entry.path().join("SKILL.md");
                        if !path.is_file() {
                            continue;
                        }
                        if entry
                            .file_name()
                            .to_string_lossy()
                            .eq_ignore_ascii_case(&requested)
                        {
                            return Ok(path);
                        }
                    }
                }
            }
            SkillRootKind::LegacyCommandsDir => {
                let direct_markdown = root.path.join(format!("{requested}.md"));
                if direct_markdown.is_file() {
                    return Ok(direct_markdown);
                }

                let direct_skill_dir = root.path.join(&requested).join("SKILL.md");
                if direct_skill_dir.is_file() {
                    return Ok(direct_skill_dir);
                }

                if let Ok(entries) = std::fs::read_dir(&root.path) {
                    for entry in entries.flatten() {
                        let path = entry.path();
                        if path.is_dir() {
                            let skill_path = path.join("SKILL.md");
                            if !skill_path.is_file() {
                                continue;
                            }
                            if entry
                                .file_name()
                                .to_string_lossy()
                                .eq_ignore_ascii_case(&requested)
                            {
                                return Ok(skill_path);
                            }
                            continue;
                        }

                        if !path
                            .extension()
                            .is_some_and(|ext| ext.to_string_lossy().eq_ignore_ascii_case("md"))
                        {
                            continue;
                        }

                        let Some(stem) = path.file_stem() else {
                            continue;
                        };
                        if stem.to_string_lossy().eq_ignore_ascii_case(&requested) {
                            return Ok(path);
                        }
                    }
                }
            }
        }
    }

    Err(format!("unknown skill: {requested}"))
}

fn normalize_requested_skill_name(skill: &str) -> Result<String, String> {
    let requested = skill.trim().trim_start_matches('/').trim_start_matches('$');
    if requested.is_empty() {
        return Err(String::from("skill must not be empty"));
    }
    Ok(requested.to_string())
}

fn push_unique_skill_root(
    roots: &mut Vec<SkillDiscoveryRoot>,
    source: SkillDiscoverySource,
    path: PathBuf,
    kind: SkillRootKind,
) {
    if path.is_dir() && !roots.iter().any(|existing| existing.path == path) {
        roots.push(SkillDiscoveryRoot { source, path, kind });
    }
}

#[cfg(test)]
mod tests {
    use std::fs;
    use std::path::{Path, PathBuf};
    use std::time::{SystemTime, UNIX_EPOCH};

    use super::{
        discover_skill_roots, resolve_skill_path, SkillDiscoveryRoot, SkillDiscoverySource,
        SkillRootKind,
    };

    fn temp_dir(label: &str) -> PathBuf {
        let nanos = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("clock")
            .as_nanos();
        std::env::temp_dir().join(format!("runtime-skills-{label}-{nanos}"))
    }

    fn write_skill(root: &Path, name: &str) {
        let skill_root = root.join(name);
        fs::create_dir_all(&skill_root).expect("skill root");
        fs::write(skill_root.join("SKILL.md"), format!("# {name}\n")).expect("write skill");
    }

    fn write_legacy_markdown(root: &Path, name: &str) {
        fs::create_dir_all(root).expect("legacy root");
        fs::write(root.join(format!("{name}.md")), format!("# {name}\n")).expect("write command");
    }

    #[test]
    fn discovers_workspace_and_user_skill_roots() {
        let _guard = crate::test_env_lock();
        let workspace = temp_dir("workspace");
        let nested = workspace.join("apps").join("ui");
        let user_home = temp_dir("home");

        fs::create_dir_all(&nested).expect("nested cwd");
        fs::create_dir_all(workspace.join(".codex").join("skills")).expect("project codex skills");
        fs::create_dir_all(workspace.join(".claw").join("commands"))
            .expect("project claw commands");
        fs::create_dir_all(user_home.join(".codex").join("skills")).expect("user codex skills");

        std::env::set_var("HOME", &user_home);
        std::env::remove_var("CODEX_HOME");

        let roots = discover_skill_roots(&nested);

        assert!(roots.contains(&SkillDiscoveryRoot {
            source: SkillDiscoverySource::ProjectCodex,
            path: workspace.join(".codex").join("skills"),
            kind: SkillRootKind::SkillsDir,
        }));
        assert!(roots.contains(&SkillDiscoveryRoot {
            source: SkillDiscoverySource::ProjectClaw,
            path: workspace.join(".claw").join("commands"),
            kind: SkillRootKind::LegacyCommandsDir,
        }));
        assert!(roots.contains(&SkillDiscoveryRoot {
            source: SkillDiscoverySource::UserCodex,
            path: user_home.join(".codex").join("skills"),
            kind: SkillRootKind::SkillsDir,
        }));

        std::env::remove_var("HOME");
        let _ = fs::remove_dir_all(workspace);
        let _ = fs::remove_dir_all(user_home);
    }

    #[test]
    fn resolves_workspace_skills_and_legacy_commands() {
        let _guard = crate::test_env_lock();
        let workspace = temp_dir("resolve");
        let nested = workspace.join("apps").join("ui");
        let original_dir = std::env::current_dir().expect("cwd");

        fs::create_dir_all(&nested).expect("nested cwd");
        write_skill(&workspace.join(".claw").join("skills"), "review");
        write_legacy_markdown(&workspace.join(".codex").join("commands"), "deploy");

        std::env::set_current_dir(&nested).expect("set cwd");
        let review = resolve_skill_path("review", &nested).expect("workspace skill");
        let deploy = resolve_skill_path("/deploy", &nested).expect("legacy command");
        std::env::set_current_dir(&original_dir).expect("restore cwd");

        assert!(review.ends_with(".claw/skills/review/SKILL.md"));
        assert!(deploy.ends_with(".codex/commands/deploy.md"));

        let _ = fs::remove_dir_all(workspace);
    }
}
@@ -11,11 +11,10 @@ use api::{
use plugins::PluginTool;
use reqwest::blocking::Client;
use runtime::{
    edit_file, execute_bash, glob_search, grep_search, load_system_prompt, read_file,
    resolve_skill_path as resolve_runtime_skill_path, write_file, ApiClient, ApiRequest,
    AssistantEvent, BashCommandInput, ContentBlock, ConversationMessage, ConversationRuntime,
    GrepSearchInput, MessageRole, PermissionMode, PermissionPolicy, RuntimeError, Session,
    TokenUsage, ToolError, ToolExecutor,
    edit_file, execute_bash, glob_search, grep_search, load_system_prompt, read_file, write_file,
    ApiClient, ApiRequest, AssistantEvent, BashCommandInput, ContentBlock, ConversationMessage,
    ConversationRuntime, GrepSearchInput, MessageRole, PermissionMode, PermissionPolicy,
    RuntimeError, Session, TokenUsage, ToolError, ToolExecutor,
};
use serde::{Deserialize, Serialize};
use serde_json::{json, Value};
@@ -92,10 +91,7 @@ impl GlobalToolRegistry {
        Ok(Self { plugin_tools })
    }

    pub fn normalize_allowed_tools(
        &self,
        values: &[String],
    ) -> Result<Option<BTreeSet<String>>, String> {
    pub fn normalize_allowed_tools(&self, values: &[String]) -> Result<Option<BTreeSet<String>>, String> {
        if values.is_empty() {
            return Ok(None);
        }
@@ -104,11 +100,7 @@ impl GlobalToolRegistry {
        let canonical_names = builtin_specs
            .iter()
            .map(|spec| spec.name.to_string())
            .chain(
                self.plugin_tools
                    .iter()
                    .map(|tool| tool.definition().name.clone()),
            )
            .chain(self.plugin_tools.iter().map(|tool| tool.definition().name.clone()))
            .collect::<Vec<_>>();
        let mut name_map = canonical_names
            .iter()
@@ -159,8 +151,7 @@ impl GlobalToolRegistry {
            .plugin_tools
            .iter()
            .filter(|tool| {
                allowed_tools
                    .is_none_or(|allowed| allowed.contains(tool.definition().name.as_str()))
                allowed_tools.is_none_or(|allowed| allowed.contains(tool.definition().name.as_str()))
            })
            .map(|tool| ToolDefinition {
                name: tool.definition().name.clone(),
@@ -183,8 +174,7 @@ impl GlobalToolRegistry {
            .plugin_tools
            .iter()
            .filter(|tool| {
                allowed_tools
                    .is_none_or(|allowed| allowed.contains(tool.definition().name.as_str()))
                allowed_tools.is_none_or(|allowed| allowed.contains(tool.definition().name.as_str()))
            })
            .map(|tool| {
                (
@@ -1465,8 +1455,47 @@ fn todo_store_path() -> Result<std::path::PathBuf, String> {
}

fn resolve_skill_path(skill: &str) -> Result<std::path::PathBuf, String> {
    let cwd = std::env::current_dir().map_err(|error| error.to_string())?;
    resolve_runtime_skill_path(skill, &cwd)
    let requested = skill.trim().trim_start_matches('/').trim_start_matches('$');
    if requested.is_empty() {
        return Err(String::from("skill must not be empty"));
    }

    let mut candidates = Vec::new();
    if let Ok(codex_home) = std::env::var("CODEX_HOME") {
        candidates.push(std::path::PathBuf::from(codex_home).join("skills"));
    }
    if let Ok(home) = std::env::var("HOME") {
        let home = std::path::PathBuf::from(home);
        candidates.push(home.join(".agents").join("skills"));
        candidates.push(home.join(".config").join("opencode").join("skills"));
        candidates.push(home.join(".codex").join("skills"));
    }
    candidates.push(std::path::PathBuf::from("/home/bellman/.codex/skills"));

    for root in candidates {
        let direct = root.join(requested).join("SKILL.md");
        if direct.exists() {
            return Ok(direct);
        }

        if let Ok(entries) = std::fs::read_dir(&root) {
            for entry in entries.flatten() {
                let path = entry.path().join("SKILL.md");
                if !path.exists() {
                    continue;
                }
                if entry
                    .file_name()
                    .to_string_lossy()
                    .eq_ignore_ascii_case(requested)
                {
                    return Ok(path);
                }
            }
        }
    }

    Err(format!("unknown skill: {requested}"))
}

const DEFAULT_AGENT_MODEL: &str = "claude-opus-4-6";
@@ -3459,65 +3488,6 @@ mod tests {
            .ends_with("/help/SKILL.md"));
    }

    #[test]
    fn skill_resolves_workspace_skill_and_legacy_command() {
        let _guard = env_lock()
            .lock()
            .unwrap_or_else(std::sync::PoisonError::into_inner);
        let root = temp_path("workspace-skills");
        let cwd = root.join("apps").join("ui");
        let original_dir = std::env::current_dir().expect("cwd");

        std::fs::create_dir_all(root.join(".claw").join("skills").join("review"))
            .expect("workspace skill dir");
        std::fs::write(
            root.join(".claw")
                .join("skills")
                .join("review")
                .join("SKILL.md"),
            "---\ndescription: Workspace review guidance\n---\n# review\n",
        )
        .expect("write workspace skill");
        std::fs::create_dir_all(root.join(".codex").join("commands")).expect("legacy root");
        std::fs::write(
            root.join(".codex").join("commands").join("deploy.md"),
            "---\ndescription: Deploy command guidance\n---\n# deploy\n",
        )
        .expect("write legacy command");
        std::fs::create_dir_all(&cwd).expect("cwd");

        std::env::set_current_dir(&cwd).expect("set cwd");

        let workspace_skill = execute_tool("Skill", &json!({ "skill": "review" }))
            .expect("workspace skill should resolve");
        let workspace_output: serde_json::Value =
            serde_json::from_str(&workspace_skill).expect("valid json");
        assert_eq!(
            workspace_output["description"].as_str(),
            Some("Workspace review guidance")
        );
        assert!(workspace_output["path"]
            .as_str()
            .expect("path")
            .ends_with(".claw/skills/review/SKILL.md"));

        let legacy_skill = execute_tool("Skill", &json!({ "skill": "/deploy" }))
            .expect("legacy command should resolve");
        let legacy_output: serde_json::Value =
            serde_json::from_str(&legacy_skill).expect("valid json");
        assert_eq!(
            legacy_output["description"].as_str(),
            Some("Deploy command guidance")
        );
        assert!(legacy_output["path"]
            .as_str()
            .expect("path")
            .ends_with(".codex/commands/deploy.md"));

        std::env::set_current_dir(&original_dir).expect("restore cwd");
        let _ = std::fs::remove_dir_all(root);
    }

    #[test]
    fn tool_search_supports_keyword_and_select_queries() {
        let keyword = execute_tool(