Compare commits

...

84 Commits

Author SHA1 Message Date
公明 dc1779275d Add files via upload 2026-05-16 13:46:24 +08:00
公明 10dff937b1 Update config.yaml 2026-05-16 13:00:29 +08:00
公明 d4e1fe3bbe Add files via upload 2026-05-15 18:03:59 +08:00
公明 179976ae57 Add files via upload 2026-05-15 17:49:33 +08:00
公明 1c758bb98c Add files via upload 2026-05-15 17:34:25 +08:00
公明 17c4f38ee3 Add files via upload 2026-05-15 17:27:45 +08:00
公明 cd7e57d121 Add files via upload 2026-05-15 14:55:43 +08:00
公明 0f2c3f65cc Add files via upload 2026-05-15 14:21:40 +08:00
公明 7779666e27 Update config.yaml 2026-05-15 14:19:18 +08:00
公明 c74bd4403b Add files via upload 2026-05-15 14:16:04 +08:00
公明 04d23ddb43 Update config.yaml 2026-05-15 14:14:09 +08:00
公明 0874e84393 Add files via upload 2026-05-15 14:12:52 +08:00
公明 57f57f30b1 Add files via upload 2026-05-15 14:11:24 +08:00
公明 f37d613a0c Add files via upload 2026-05-15 14:09:37 +08:00
公明 87d0ff9154 Update config.yaml 2026-05-15 14:08:28 +08:00
公明 b3418f39b8 Update config.yaml 2026-05-15 11:53:07 +08:00
公明 f9e1ca0e2d Add files via upload 2026-05-15 11:49:53 +08:00
公明 2c45879669 Add files via upload 2026-05-15 11:48:58 +08:00
公明 1cdcfa2c2d Add files via upload 2026-05-15 11:47:34 +08:00
公明 eab5b73846 Add files via upload 2026-05-15 11:46:02 +08:00
公明 d961ba1ec7 Add files via upload 2026-05-15 11:43:33 +08:00
公明 1ba5e57ec6 Update config.yaml 2026-05-14 19:35:37 +08:00
公明 1216d25f96 Add files via upload 2026-05-14 19:33:15 +08:00
公明 fde693408e Add files via upload 2026-05-14 19:31:21 +08:00
公明 352a81a869 Add files via upload 2026-05-14 19:29:59 +08:00
公明 b2562b1010 Add files via upload 2026-05-14 19:28:37 +08:00
公明 0d8ba51087 Add files via upload 2026-05-14 19:26:23 +08:00
公明 0b847fcea3 Delete multiagent directory 2026-05-14 19:25:42 +08:00
公明 bf2f49fe62 Delete skillpackage directory 2026-05-14 19:25:19 +08:00
公明 75e64b1a86 Delete einomcp directory 2026-05-14 19:25:09 +08:00
公明 2167735022 Delete database directory 2026-05-14 19:24:58 +08:00
公明 4ee292cc1f Delete storage directory 2026-05-14 19:24:48 +08:00
公明 961205940f Delete agents directory 2026-05-14 19:24:19 +08:00
公明 ffe797bd06 Delete agent directory 2026-05-14 19:24:04 +08:00
公明 b6c864547e Delete mcp directory 2026-05-14 19:23:52 +08:00
公明 da369c2edc Add files via upload 2026-05-14 19:23:27 +08:00
公明 54dc31a616 Add files via upload 2026-05-14 19:21:35 +08:00
公明 9e0b985221 Add files via upload 2026-05-14 19:19:26 +08:00
公明 eb47077082 Update config.yaml 2026-05-14 14:59:27 +08:00
公明 f9a482857d Add files via upload 2026-05-14 11:57:00 +08:00
公明 679a68b12f Add files via upload 2026-05-14 11:55:47 +08:00
公明 840a26c7ef Add files via upload 2026-05-14 11:54:23 +08:00
公明 030e69c02d Add files via upload 2026-05-14 11:49:08 +08:00
公明 d9683cdb44 Add files via upload 2026-05-14 11:33:12 +08:00
公明 60a063dd7d Add files via upload 2026-05-14 11:31:56 +08:00
公明 5f0c1805a7 Add files via upload 2026-05-14 11:30:28 +08:00
公明 cb7e66001b Update config.yaml 2026-05-13 17:09:31 +08:00
公明 4ea838f1d7 Update config.yaml 2026-05-13 16:48:03 +08:00
公明 573648fc4b Add files via upload 2026-05-13 16:43:26 +08:00
公明 f0e090abea Add files via upload 2026-05-13 16:41:23 +08:00
公明 549dcf518c Add files via upload 2026-05-13 16:39:08 +08:00
公明 c74e20c54a Add files via upload 2026-05-13 16:36:09 +08:00
公明 c94a9fd9e9 Add files via upload 2026-05-13 15:26:02 +08:00
公明 ce9749a8ef Update config.yaml 2026-05-13 15:23:18 +08:00
公明 145da12017 Add files via upload 2026-05-13 12:33:23 +08:00
公明 5111f4c311 Add files via upload 2026-05-13 12:08:28 +08:00
公明 8f6384a083 Add files via upload 2026-05-13 12:06:56 +08:00
公明 762f778e1e Add files via upload 2026-05-13 12:05:12 +08:00
公明 4a11ba8f14 Add files via upload 2026-05-13 10:40:56 +08:00
公明 86090af4df Update config.yaml 2026-05-12 17:34:59 +08:00
公明 2dea6e36bd Add files via upload 2026-05-12 17:33:14 +08:00
公明 38ce695708 Update config.yaml 2026-05-12 17:29:45 +08:00
公明 41fe90faa3 Add files via upload 2026-05-12 17:23:57 +08:00
公明 9f54bdb1bf Add files via upload 2026-05-12 17:22:19 +08:00
公明 08e727aa41 Add files via upload 2026-05-12 17:19:51 +08:00
公明 176c17d630 Add files via upload 2026-05-12 17:17:36 +08:00
公明 62710f6619 Add files via upload 2026-05-12 16:42:43 +08:00
公明 e4dbb96b3e Add files via upload 2026-05-12 16:41:15 +08:00
公明 832532213a Add files via upload 2026-05-12 16:39:09 +08:00
公明 eb04ac0c3a Delete web/templates/index.html.bak 2026-05-12 16:36:51 +08:00
公明 1946508325 Add files via upload 2026-05-12 16:36:23 +08:00
公明 89d1c5124f Add files via upload 2026-05-12 14:57:04 +08:00
公明 1e7a3299a5 Merge pull request #118 from Dilligaf371/fix/mcp-stdio-init-result-storage
fix(mcp-stdio): initialize result storage so query tools work
2026-05-12 13:01:04 +08:00
公明 cae3a77331 Add files via upload 2026-05-12 12:56:11 +08:00
公明 2e1e57ce27 Add files via upload 2026-05-12 12:55:02 +08:00
公明 45b6ed2847 Add files via upload 2026-05-12 12:53:20 +08:00
公明 88eadf13a4 Add files via upload 2026-05-12 12:48:42 +08:00
Gilles Ceyssat dca5666b18 fix(mcp-stdio): initialize result storage so query tools work
The stdio MCP entrypoint (cmd/mcp-stdio/main.go) constructed the
security Executor without calling SetResultStorage, leaving it nil.
Any tool that goes through the query path — notably `exec` (the
generic shell tool) and the YAML wrappers that emit large results —
failed with:

    "错误: 结果存储未初始化"  (Error: result storage not initialized)

The full HTTP app at internal/app/app.go:118-147 initializes a
FileResultStorage from cfg.Agent.ResultStorageDir and wires it via
both agent.SetResultStorage and executor.SetResultStorage. The stdio
entrypoint needs the same wiring.

This replicates the storage init block in main.go so stdio-mode tool
execution stops failing on the query path.

Verified: before, `exec` calls returned the "结果存储未初始化" error.
After, `exec nmap -p 22,80,443 127.0.0.1` (bridged through an
external MCP client) returns the full nmap output as expected.
2026-05-12 08:13:13 +04:00
公明 e5d52cdf85 Update config.yaml 2026-05-11 20:36:58 +08:00
公明 65e48826ff Update config.yaml 2026-05-11 19:59:41 +08:00
公明 0cff507272 Add files via upload 2026-05-11 19:57:46 +08:00
公明 30afd71c05 Add files via upload 2026-05-11 19:56:38 +08:00
公明 d2b6a154de Add files via upload 2026-05-11 19:54:40 +08:00
公明 278d5aa25c Add files via upload 2026-05-11 19:52:39 +08:00
66 changed files with 4853 additions and 2674 deletions
+9 -5
@@ -174,9 +174,11 @@ The `run.sh` script will automatically:
- ✅ Build the project
- ✅ Start the server
**Networking defaults:** `run.sh` starts the server with **`--https`** and the repo **`config.yaml`** (local self-signed TLS; better for many concurrent streams). Use **`./run.sh --http`** for plain HTTP. In production, set **`server.tls_cert_path`** / **`server.tls_key_path`** in **`config.yaml`** (see comments there). For manual runs, add **`--https`** or **`CYBERSTRIKE_HTTPS=1`**; if **`-config`** is wrong, the binary prints a short usage hint on stderr.
**First-Time Configuration:**
1. **Configure OpenAI-compatible API** (required before first use)
- Open http://localhost:8080 after launch
- After launch, open **`https://127.0.0.1:8080/`** (or **`https://localhost:8080/`**; replace **8080** with `server.port` in `config.yaml`) and accept the self-signed certificate warning once. If you used `./run.sh --http`, use **`http://`** instead.
- Go to `Settings` → Fill in your API credentials:
```yaml
openai:
@@ -197,21 +199,23 @@ The `run.sh` script will automatically:
**Alternative Launch Methods:**
```bash
# Direct Go run (requires manual setup)
go run cmd/server/main.go
# Direct Go run (set up env yourself); add --https to match run.sh defaults
go run cmd/server/main.go --https
# Manual build
go build -o cyberstrike-ai cmd/server/main.go
./cyberstrike-ai
./cyberstrike-ai --https
```
If server logs show `client sent an HTTP request to an HTTPS server`, a client is still using **`http://`** on a TLS-only port—switch the URL to **`https://`**.
**Note:** The Python virtual environment (`venv/`) is automatically created and managed by `run.sh`. Tools that require Python (like `api-fuzzer`, `http-framework-test`, etc.) will automatically use this environment.
### Version Update (No Breaking Changes)
**CyberStrikeAI one-click upgrade (recommended):**
1. (First time) enable the script: `chmod +x upgrade.sh`
2. Upgrade with: `./upgrade.sh` (optional flags: `--tag vX.Y.Z`, `--no-venv`, `--preserve-custom`, `--yes`)
2. Upgrade with: `./upgrade.sh` (optional flags: `--tag vX.Y.Z`, `--no-venv`, `--yes`). Local `tools/`, `roles/`, and `skills/` are always preserved.
3. The script will back up your `config.yaml` and `data/`, upgrade the code from GitHub Release, update `config.yaml`'s `version`, then restart the server.
Recommended one-liner:
+9 -5
@@ -173,9 +173,11 @@ chmod +x run.sh && ./run.sh
- ✅ Build the project
- ✅ Start the server
**Networking defaults:** `run.sh` starts the server with **`--https`** and the repo-root **`config.yaml`** (local self-signed certificate; more stable for multiple concurrent streams). For plain HTTP, use **`./run.sh --http`**. In production, configure real certificates via **`server.tls_cert_path` / `server.tls_key_path`** in **`config.yaml`** (see the comments in the file). For manual starts, add **`--https`** or the **`CYBERSTRIKE_HTTPS=1`** environment variable; if **`-config`** is wrong, the program prints the correct usage in the terminal.
**First-Time Configuration:**
1. **Configure the AI model API** (required before first use)
- After launch, visit http://localhost:8080
- After launch, open **`https://127.0.0.1:8080/`** (or **`https://localhost:8080/`**; the port follows **`server.port`** in `config.yaml`, default 8080) and trust the self-signed certificate when prompted. If you used **`./run.sh --http`**, use **`http://`** instead.
- Go to `Settings` → fill in your API credentials:
```yaml
openai:
@@ -196,20 +198,22 @@ chmod +x run.sh && ./run.sh
**Alternative Launch Methods:**
```bash
# Direct run (manual environment setup required)
go run cmd/server/main.go
# Direct run (set up the environment yourself); add --https to match run.sh defaults
go run cmd/server/main.go --https
# Manual build
go build -o cyberstrike-ai cmd/server/main.go
./cyberstrike-ai
./cyberstrike-ai --https
```
If the logs show `client sent an HTTP request to an HTTPS server`, a client is still using **`http://`** against an HTTPS-only port; switch to **`https://`**.
**Note:** The Python virtual environment (`venv/`) is created and managed automatically by `run.sh`. Tools that require Python (such as `api-fuzzer`, `http-framework-test`, etc.) use this environment automatically.
### CyberStrikeAI Version Update (No Breaking Changes)
1. (First time) make the script executable: `chmod +x upgrade.sh`
2. One-click upgrade: `./upgrade.sh` (optional flags: `--tag vX.Y.Z`, `--no-venv`, `--preserve-custom`, `--yes`)
2. One-click upgrade: `./upgrade.sh` (optional flags: `--tag vX.Y.Z`, `--no-venv`, `--yes`). Local `tools/`, `roles/`, and `skills/` are always preserved and never overwritten.
3. The script backs up your `config.yaml` and `data/`, upgrades the code from the GitHub Release, updates the `version` field in `config.yaml`, then restarts the server.
Recommended one-liner:
+18
@@ -5,6 +5,7 @@ import (
"cyberstrike-ai/internal/logger"
"cyberstrike-ai/internal/mcp"
"cyberstrike-ai/internal/security"
"cyberstrike-ai/internal/storage"
"flag"
"fmt"
"os"
@@ -32,6 +33,23 @@ func main() {
// Create the security tool executor
executor := security.NewExecutor(&cfg.Security, mcpServer, log.Logger)
// Initialize result storage (same logic as internal/app/app.go).
// stdio mode previously skipped this, so query-style tools like 'exec' failed with "结果存储未初始化" (result storage not initialized).
resultStorageDir := "tmp"
if cfg.Agent.ResultStorageDir != "" {
resultStorageDir = cfg.Agent.ResultStorageDir
}
if err := os.MkdirAll(resultStorageDir, 0755); err != nil {
fmt.Fprintf(os.Stderr, "创建结果存储目录失败: %v\n", err)
os.Exit(1)
}
resultStorage, err := storage.NewFileResultStorage(resultStorageDir, log.Logger)
if err != nil {
fmt.Fprintf(os.Stderr, "初始化结果存储失败: %v\n", err)
os.Exit(1)
}
executor.SetResultStorage(resultStorage)
// Register tools
executor.RegisterTools(mcpServer)
+43 -3
@@ -9,22 +9,62 @@ import (
"fmt"
"os"
"os/signal"
"strings"
"syscall"
)
func main() {
var configPath = flag.String("config", "config.yaml", "配置文件路径")
var httpsBootstrap = flag.Bool("https", false, "启用主站 HTTPS:未配置 tls_cert_path/tls_key_path 时使用内存自签证书(本地测试);与 run.sh 默认行为一致")
flag.Parse()
// Environment-variable fallback (for systemd/docker and other no-flag scenarios)
if !*httpsBootstrap {
v := strings.TrimSpace(os.Getenv("CYBERSTRIKE_HTTPS"))
if v == "1" || strings.EqualFold(v, "true") || strings.EqualFold(v, "yes") {
*httpsBootstrap = true
}
}
// Load configuration
cfg, err := config.Load(*configPath)
cp := strings.TrimSpace(*configPath)
if cp == "" {
cp = "config.yaml"
}
if strings.HasPrefix(cp, "-") {
fmt.Fprintf(os.Stderr, "无效的 -config 路径 %q。\n若同时需要 HTTPS,请写成: ./cyberstrike-ai --https -config config.yaml-config 后必须是 yaml 文件路径)。\n", cp)
os.Exit(2)
}
cfg, err := config.Load(cp)
if err != nil {
fmt.Printf("加载配置失败: %v\n", err)
return
}
if *httpsBootstrap {
config.ApplyDevHTTPSBootstrap(cfg)
}
port := cfg.Server.Port
if port <= 0 {
port = 8080
}
scheme := "http"
if config.MainWebUIUsesHTTPS(&cfg.Server) {
scheme = "https"
}
fmt.Println()
fmt.Printf("→ Web 界面: %s://127.0.0.1:%d/\n", scheme, port)
if scheme == "https" && cfg.Server.TLSAutoSelfSign {
fmt.Println(" (内存自签证书:浏览器首次需确认「继续访问」)")
}
if scheme == "https" && config.ServerHTTPRedirectEnabled(&cfg.Server) {
fmt.Printf(" http://127.0.0.1:%d/ 将自动跳转到 HTTPS\n", port)
}
fmt.Println()
// When MCP is enabled and auth_header_value is empty, generate a random key and write it back to the config
if err := config.EnsureMCPAuth(*configPath, cfg); err != nil {
if err := config.EnsureMCPAuth(cp, cfg); err != nil {
fmt.Printf("MCP 鉴权配置失败: %v\n", err)
return
}
@@ -44,7 +84,7 @@ func main() {
signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
// Create the application
application, err := app.New(cfg, log)
application, err := app.New(cfg, log, cp)
if err != nil {
log.Fatal("应用初始化失败", "error", err)
}
+37 -4
@@ -10,11 +10,22 @@
# ============================================
# Version string shown in the frontend (optional; a default version is shown if unset)
version: "v1.6.7"
version: "v1.6.15"
# Server configuration
server:
host: 0.0.0.0 # Listen address; 0.0.0.0 listens on all network interfaces
port: 8080 # HTTP service port; open http://localhost:8080 in a browser
port: 8080 # Service port; http://localhost:8080 when TLS is disabled
# --- Optional: HTTPS + HTTP/2 (works around browsers' per-origin HTTP/1.1 connection limit; more stable for multiple concurrent Deep streams) ---
# TLS is enabled when any of the following holds: tls_enabled: true, or tls_auto_self_sign: true, or both tls_cert_path + tls_key_path are set.
# Once enabled, access via https://127.0.0.1:<port>/; plain http:// requests to the same port get a 308 redirect to HTTPS (disable with tls_http_redirect: false).
tls_enabled: true
# With HTTPS enabled, whether plain HTTP auto-redirects to HTTPS (default true; the port sniffs TLS vs HTTP and demultiplexes)
# tls_http_redirect: true
# Option A (recommended for production): paths to a PEM certificate and private key
# tls_cert_path: /path/to/fullchain.pem
# tls_key_path: /path/to/privkey.pem
# Option B (local/testing only): in-memory self-signed certificate when no cert files are set (browsers warn it is untrusted; SAN covers localhost / 127.0.0.1)
tls_auto_self_sign: true
# Auth configuration
auth:
password: # Web login password; change this to a strong password
@@ -41,6 +52,13 @@ openai:
api_key: sk-xxxxxxx # API key (required)
model: qwen3-max # Model name (required)
max_total_tokens: 120000 # Max token budget for LLM context (shared by memory compression and attack-chain building)
# Eino-path model reasoning: thinking / reasoning_effort etc. for DeepSeek/OpenAI; with provider claude, merged into Anthropic's top-level thinking (extended thinking); mode: off disables
reasoning:
mode: off # auto | on | off (off appends no reasoning extension fields)
effort: max # low | medium | high | max; empty means unspecified (under openai_compat, auto with no effort sends no request extension)
allow_client_reasoning: true # false ignores reasoning in the chat request body; only the settings here apply
profile: openai_compat # auto | deepseek_compat | openai_compat | output_config_effort
# extra_request_fields: {} # Optional: admin-defined root-level JSON fragment (advanced)
# ============================================
# Reconnaissance (FOFA) configuration (optional)
# ============================================
@@ -53,10 +71,10 @@ fofa:
# Agent configuration
# When the max iteration count is reached, the AI automatically summarizes the test results
agent:
max_iterations: 120 # Max iterations: how many rounds of tool calls the agent may run
max_iterations: 1200 # Max iterations: how many rounds of tool calls the agent may run
large_result_threshold: 102400 # Large-result threshold in bytes (default 50 KB); larger results are saved to storage automatically
result_storage_dir: tmp # Result storage directory; large results are saved here
tool_timeout_minutes: 30 # Max duration of a single tool run in minutes; terminated on timeout; 0 means unlimited (not recommended, risks long hangs)
tool_timeout_minutes: 60 # Max duration of a single tool run in minutes; terminated on timeout; 0 means unlimited (not recommended, risks long hangs)
# system_prompt_path: prompts/single-react.md # Optional: single-agent system prompt file (relative to this config file's directory); replaces the built-in prompt when non-empty and readable
# Global human-in-the-loop (HITL) whitelist: tools listed here never require approval; merged (as a union) with the chat page's "whitelist tools (no approval, comma-separated)"; the sidebar "Apply" can write into this list and take effect immediately.
hitl:
@@ -110,6 +128,21 @@ multi_agent:
deep_output_key: "" # Non-empty: write the final assistant output to this adk session key (Deep and the Supervisor main agent); empty: do not write
deep_model_retry_max_retries: 0 # >0: framework-level max retries when a ChatModel call fails (Deep and Supervisor main); 0: no retries
task_tool_description_prefix: "" # Non-empty: custom description prefix for Deep's task tool only; the subagent name is appended at runtime; empty uses Eino's default generation
# Eino callbacks + OpenTelemetry: framework-level spans (aligned with Zap); eino_trace_* is not pushed to the end-user UI by default (see sse_trace_to_client)
eino_callbacks:
enabled: true
# log_only = Zap+OTel only (recommended default) | sse/full = also enable streaming callback copies etc. (full includes stream hooks)
mode: log_only
sse_trace_to_client: false # true: when mode is sse/full, push eino_trace_* to the frontend timeline (troubleshooting / intranet demos)
max_input_summary_runes: 400
max_output_summary_runes: 400
zap_verbose: false # true: Debug logs include input/output summaries
otel:
enabled: true
service_name: cyberstrike-ai
exporter: stdout # none | stdout (dev/local) | otlphttp (production, to a Collector)
otlp_endpoint: localhost:4318 # used with otlphttp, host:port; the path is fixed at /v1/traces
sample_ratio: 1.0 # 0~1, ParentBased+TraceIDRatio
# Database configuration
database:
path: data/conversations.db # SQLite database file path for conversation history and messages
+18 -2
@@ -27,6 +27,11 @@ require (
github.com/open-dingtalk/dingtalk-stream-sdk-go v0.9.1
github.com/pkoukk/tiktoken-go v0.1.8
github.com/robfig/cron/v3 v3.0.1
go.opentelemetry.io/otel v1.34.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.34.0
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.34.0
go.opentelemetry.io/otel/sdk v1.34.0
go.opentelemetry.io/otel/trace v1.34.0
go.uber.org/zap v1.26.0
golang.org/x/text v0.26.0
golang.org/x/time v0.14.0
@@ -39,6 +44,7 @@ require (
github.com/buger/jsonparser v1.1.1 // indirect
github.com/bytedance/gopkg v0.1.3 // indirect
github.com/bytedance/sonic/loader v0.5.0 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cloudwego/base64x v0.1.6 // indirect
github.com/cloudwego/eino-ext/libs/acl/openai v0.1.17 // indirect
github.com/dlclark/regexp2 v1.10.0 // indirect
@@ -46,6 +52,8 @@ require (
github.com/evanphx/json-patch v0.5.2 // indirect
github.com/gabriel-vasile/mimetype v1.4.2 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.14.0 // indirect
@@ -53,6 +61,7 @@ require (
github.com/gogo/protobuf v1.3.2 // indirect
github.com/google/jsonschema-go v0.3.0 // indirect
github.com/goph/emperror v0.17.2 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.25.1 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
github.com/leodido/go-urn v1.2.4 // indirect
@@ -71,14 +80,21 @@ require (
github.com/wk8/go-ordered-map/v2 v2.1.8 // indirect
github.com/yargevad/filepathx v1.0.0 // indirect
github.com/yosida95/uritemplate/v3 v3.0.2 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0 // indirect
go.opentelemetry.io/otel/metric v1.34.0 // indirect
go.opentelemetry.io/proto/otlp v1.5.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/arch v0.15.0 // indirect
golang.org/x/crypto v0.39.0 // indirect
golang.org/x/exp v0.0.0-20250305212735-054e65f0b394 // indirect
golang.org/x/net v0.24.0 // indirect
golang.org/x/net v0.35.0 // indirect
golang.org/x/oauth2 v0.30.0 // indirect
golang.org/x/sys v0.33.0 // indirect
google.golang.org/protobuf v1.30.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f // indirect
google.golang.org/grpc v1.69.4 // indirect
google.golang.org/protobuf v1.36.3 // indirect
)
// Fix the DingTalk Stream SDK "panic: send on closed channel" after a long-lived connection drops (screen off / network interruption)
+43 -7
@@ -17,6 +17,8 @@ github.com/bytedance/sonic v1.15.0 h1:/PXeWFaR5ElNcVE84U0dOHjiMHQOwNIx3K4ymzh/uS
github.com/bytedance/sonic v1.15.0/go.mod h1:tFkWrPz0/CUCLEF4ri4UkHekCIcdnkqXw9VduqpJh0k=
github.com/bytedance/sonic/loader v0.5.0 h1:gXH3KVnatgY7loH5/TkeVyXPfESoqSBSBEiDd5VjlgE=
github.com/bytedance/sonic/loader v0.5.0/go.mod h1:AR4NYCk5DdzZizZ5djGqQ92eEhCCcdf5x77udYiSJRo=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/certifi/gocertifi v0.0.0-20190105021004-abcd57078448/go.mod h1:GJKEexRPVJrBSOjoqN5VNOIKJ5Q3RViH6eu3puDRwx4=
github.com/cloudwego/base64x v0.1.6 h1:t11wG9AECkCDk5fMSoxmufanudBtJ+/HemLstXDLI2M=
github.com/cloudwego/base64x v0.1.6/go.mod h1:OFcloc187FXDaYHvrNIjxSe8ncn0OOM8gEHfghB2IPU=
@@ -59,6 +61,11 @@ github.com/gin-gonic/gin v1.9.1 h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg=
github.com/gin-gonic/gin v1.9.1/go.mod h1:hPrL7YrpYKXt5YId3A/Tnip5kqbEAP+KLuI3SUcPTeU=
github.com/go-check/check v0.0.0-20180628173108-788fd7840127 h1:0gkP6mzaMqkmpcJYCFOLkIBwI7xFExG03bbkOkCvUPI=
github.com/go-check/check v0.0.0-20180628173108-788fd7840127/go.mod h1:9ES+weclKsC9YodN5RgxqK/VD9HM9JsCSh7rNhMZE98=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
@@ -75,8 +82,8 @@ github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -90,6 +97,8 @@ github.com/gopherjs/gopherjs v1.17.2 h1:fQnZVsXk8uxXIStYb0N4bGk7jeyTalG/wsZjQ25d
github.com/gopherjs/gopherjs v1.17.2/go.mod h1:pRRIvn/QzFLrKfvEz3qUuEhtE/zLCWfreZ6J5gM2i+k=
github.com/gorilla/websocket v1.5.0 h1:PPwGk2jz7EePpoHN/+ClbZu8SPxiqlu12wZP/3sWmnc=
github.com/gorilla/websocket v1.5.0/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.25.1 h1:VNqngBF40hVlDloBruUehVYC3ArSgIyScOAyMRqBxRg=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.25.1/go.mod h1:RBRO7fro65R6tjKzYgLAFo0t1QEXY1Dp+i/bvpRiqiQ=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
@@ -191,6 +200,26 @@ github.com/yosida95/uritemplate/v3 v3.0.2 h1:Ed3Oyj9yrmi9087+NczuL5BwkIc4wvTb5zI
github.com/yosida95/uritemplate/v3 v3.0.2/go.mod h1:ILOh0sOhIJR3+L/8afwt/kE++YT040gmv5BQTMR2HP4=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/otel v1.34.0 h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=
go.opentelemetry.io/otel v1.34.0/go.mod h1:OWFPOQ+h4G8xpyjgqo4SxJYdDQ/qmRH+wivy7zzx9oI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0 h1:OeNbIYk/2C15ckl7glBlOBp5+WlYsOElzTNmiPW/x60=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.34.0/go.mod h1:7Bept48yIeqxP2OZ9/AqIpYS94h2or0aB4FypJTc8ZM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.34.0 h1:BEj3SPM81McUZHYjRS5pEgNgnmzGJ5tRpU5krWnV8Bs=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.34.0/go.mod h1:9cKLGBDzI/F3NoHLQGm4ZrYdIHsvGt6ej6hUowxY0J4=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.34.0 h1:jBpDk4HAUsrnVO1FsfCfCOTEc/MkInJmvfCHYLFiT80=
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.34.0/go.mod h1:H9LUIM1daaeZaz91vZcfeM0fejXPmgCYE8ZhzqfJuiU=
go.opentelemetry.io/otel/metric v1.34.0 h1:+eTR3U0MyfWjRDhmFMxe2SsW64QrZ84AOhvqS7Y+PoQ=
go.opentelemetry.io/otel/metric v1.34.0/go.mod h1:CEDrp0fy2D0MvkXE+dPV7cMi8tWZwX3dmaIhwPOaqHE=
go.opentelemetry.io/otel/sdk v1.34.0 h1:95zS4k/2GOy069d321O8jWgYsW3MzVV+KuSPKp7Wr1A=
go.opentelemetry.io/otel/sdk v1.34.0/go.mod h1:0e/pNiaMAqaykJGKbi+tSjWfNNHMTxoC9qANsCzbyxU=
go.opentelemetry.io/otel/sdk/metric v1.31.0 h1:i9hxxLJF/9kkvfHppyLL55aW7iIJz4JjxTeYusH7zMc=
go.opentelemetry.io/otel/sdk/metric v1.31.0/go.mod h1:CRInTMVvNhUKgSAMbKyTMxqOBC0zgyxzW55lZzX43Y8=
go.opentelemetry.io/otel/trace v1.34.0 h1:+ouXS2V8Rd4hp4580a8q23bg0azF2nI8cqLYnC8mh/k=
go.opentelemetry.io/otel/trace v1.34.0/go.mod h1:Svm7lSjQD7kG7KJ/MUHPVXSDGz2OX4h0M2jHBhmSfRE=
go.opentelemetry.io/proto/otlp v1.5.0 h1:xJvq7gMzB31/d406fB8U5CBdyQGw4P399D1aQWU/3i4=
go.opentelemetry.io/proto/otlp v1.5.0/go.mod h1:keN8WnHxOy8PG0rQZjJJ5A2ebUoafqWp0eVQ4yIXvJ4=
go.uber.org/goleak v1.2.0 h1:xqgm/S+aQvhWFTtR0XK3Jvg7z8kGV8P4X14IzwN3Eqk=
go.uber.org/goleak v1.2.0/go.mod h1:XJYK+MuIchqpmGmUSAzotztawfKvYLUIgg7guXrwVUo=
go.uber.org/mock v0.4.0 h1:VcM4ZOtdbR4f6VXfiOpwpVJDL6lCReaZ6mw31wqh7KU=
@@ -216,8 +245,10 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.24.0 h1:1PcaxkF854Fu3+lvBIx5SYn9wRlBzzcnHZSiaFFAb0w=
golang.org/x/net v0.24.0/go.mod h1:2Q7sJY5mzlzWjKtYUEXSlBWCdyaioyXzRB2RtU8KVE8=
golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0=
golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k=
golang.org/x/net v0.35.0 h1:T5GQRQb2y08kTAByq9L4/bz8cipCdA8FbRTXewonqY8=
golang.org/x/net v0.35.0/go.mod h1:EglIi67kWsHKlRzzVMUD93VMSWGFOMSZgxFjparz1Qk=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -251,9 +282,14 @@ golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng=
google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f h1:gap6+3Gk41EItBuyi4XX/bp4oqJ3UwuIMl25yGinuAA=
google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f/go.mod h1:Ic02D47M+zbarjYYUlK57y316f2MoN0gjAwI3f2S95o=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f h1:OxYkA3wjPsZyBylwymxSHa7ViiW1Sml4ToBrncvFehI=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f/go.mod h1:+2Yz8+CLJbIfL9z73EW45avw8Lmge3xVElCP9zEKi50=
google.golang.org/grpc v1.69.4 h1:MF5TftSMkd8GLw/m0KM6V8CMOCY6NZ1NQDPGFgbTt4A=
google.golang.org/grpc v1.69.4/go.mod h1:vyjdE6jLBI76dgpDojsFGNaHlxdjXN9ghpnd2o7JGZ4=
google.golang.org/protobuf v1.36.3 h1:82DV7MYdb8anAVi3qge1wSnMDrnKK7ebr+I0hHRN1BU=
google.golang.org/protobuf v1.36.3/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
+7 -2
@@ -195,6 +195,8 @@ type ChatMessage struct {
ToolCallID string `json:"tool_call_id,omitempty"`
// ToolName tool role only: restored from name or tool_name in the Eino/trace JSON, used to build the ToolMessage when resuming a run.
ToolName string `json:"tool_name,omitempty"`
// ReasoningContent maps to OpenAI/DeepSeek reasoning_content; in thinking mode it must be echoed back when resuming after tool calls (see the DeepSeek docs).
ReasoningContent string `json:"reasoning_content,omitempty"`
}
// MarshalJSON custom JSON serialization: converts arguments inside tool_calls to a JSON string
@@ -208,6 +210,9 @@ func (cm ChatMessage) MarshalJSON() ([]byte, error) {
if cm.Content != "" {
aux["content"] = cm.Content
}
if cm.ReasoningContent != "" {
aux["reasoning_content"] = cm.ReasoningContent
}
// Append tool_call_id (if present)
if cm.ToolCallID != "" {
@@ -663,8 +668,8 @@ func (a *Agent) AgentLoopWithProgress(ctx context.Context, userInput string, his
// Check for tool calls
if len(choice.Message.ToolCalls) > 0 {
// Thinking content: if thinking stream deltas (thinking_stream_*) were enabled this round, the frontend deduplicates;
// also append a persistable thinking entry when the "thinking phase" ends (shown again after a page reload).
// ReAct assistant body stream deltas (thinking_stream_*) are grouped under "thinking" in the UI; the frontend deduplicates against a repeated streamId;
// this thinking entry is persisted for display after reload (consistent with the streamed aggregate).
if choice.Message.Content != "" {
sendProgress("thinking", choice.Message.Content, map[string]interface{}{
"iteration": i + 1,
+78 -9
@@ -3,8 +3,10 @@ package app
import (
"context"
"crypto/subtle"
"crypto/tls"
"database/sql"
"fmt"
"net"
"net/http"
"os"
"path/filepath"
@@ -16,6 +18,7 @@ import (
"cyberstrike-ai/internal/c2"
"cyberstrike-ai/internal/config"
"cyberstrike-ai/internal/database"
"cyberstrike-ai/internal/einoobserve"
"cyberstrike-ai/internal/handler"
"cyberstrike-ai/internal/knowledge"
"cyberstrike-ai/internal/logger"
@@ -29,6 +32,7 @@ import (
"github.com/gin-gonic/gin"
"github.com/google/uuid"
"go.uber.org/zap"
"golang.org/x/net/http2"
)
// App is the application
@@ -59,7 +63,7 @@ type App struct {
}
// New creates a new application
func New(cfg *config.Config, log *logger.Logger) (*App, error) {
func New(cfg *config.Config, log *logger.Logger, configPath string) (*App, error) {
gin.SetMode(gin.ReleaseMode)
router := gin.Default()
@@ -90,6 +94,7 @@ func New(cfg *config.Config, log *logger.Logger) (*App, error) {
// Create the MCP server (with database persistence)
mcpServer := mcp.NewServerWithStorage(log.Logger, db)
mcpServer.ConfigureHTTPToolCallTimeoutFromAgentMinutes(cfg.Agent.ToolTimeoutMinutes)
// Create the security tool executor
executor := security.NewExecutor(&cfg.Security, mcpServer, log.Logger)
@@ -290,10 +295,10 @@ func New(cfg *config.Config, log *logger.Logger) (*App, error) {
}()
}
// Get the config file path
configPath := "config.yaml"
if len(os.Args) > 1 {
configPath = os.Args[1]
// The config file path must be passed in by the entrypoint (matching the -config flag). Never read os.Args[1] here again, or ./cyberstrike-ai --https would treat --https as the path.
configPath = strings.TrimSpace(configPath)
if configPath == "" {
configPath = "config.yaml"
}
skillsDir := skillpackage.SkillsRootFromConfig(cfg.SkillsDir, configPath)
@@ -528,18 +533,49 @@ func (a *App) RunWithContext(ctx context.Context) error {
}()
}
// Start the main server
// Start the main server (optional HTTPS + HTTP/2; see server.tls_* in config)
addr := fmt.Sprintf("%s:%d", a.config.Server.Host, a.config.Server.Port)
a.logger.Info("启动HTTP服务器", zap.String("address", addr))
tlsMode, tlsConf, certFile, keyFile, tlsErr := prepareMainServerTLS(&a.config.Server)
if tlsErr != nil {
return tlsErr
}
srv := &http.Server{Addr: addr, Handler: a.router}
var mainMux *mainServerMux
httpRedirect := config.ServerHTTPRedirectEnabled(&a.config.Server)
if tlsMode != mainTLSOff {
srv.TLSConfig = tlsConf
if err := http2.ConfigureServer(srv, &http2.Server{}); err != nil {
return fmt.Errorf("主服务 HTTP/2 配置失败: %w", err)
}
switch tlsMode {
case mainTLSFromFiles:
a.logger.Info("启动 HTTPS 主服务(已启用 HTTP/2 协商)",
zap.String("address", addr),
zap.String("cert", certFile),
)
case mainTLSInMemorySelfSigned:
a.logger.Info("启动 HTTPS 主服务(内存自签证书,仅测试;已启用 HTTP/2 协商)",
zap.String("address", addr),
)
}
if httpRedirect {
a.logger.Info("已启用 HTTP→HTTPS 自动跳转(同端口嗅探分流)", zap.String("address", addr))
}
} else {
a.logger.Info("启动 HTTP 主服务", zap.String("address", addr))
}
// Watch for context cancellation and shut the HTTP server down gracefully.
go func() {
<-ctx.Done()
shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
if err := srv.Shutdown(shutdownCtx); err != nil {
if mainMux != nil {
if err := mainMux.Shutdown(shutdownCtx); err != nil {
a.logger.Error("HTTP/HTTPS 分流服务器关闭失败", zap.Error(err))
}
} else if err := srv.Shutdown(shutdownCtx); err != nil {
a.logger.Error("HTTP服务器关闭失败", zap.Error(err))
}
if mcpServer != nil {
@@ -549,7 +585,36 @@ func (a *App) RunWithContext(ctx context.Context) error {
}
}()
if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
var err error
switch {
case tlsMode != mainTLSOff && httpRedirect:
var tlsConfReady *tls.Config
tlsConfReady, err = ensureMainTLSConfigCerts(tlsMode, tlsConf, certFile, keyFile)
if err != nil {
return fmt.Errorf("加载 TLS 证书: %w", err)
}
srv.TLSConfig = tlsConfReady
var ln net.Listener
ln, err = net.Listen("tcp", addr)
if err != nil {
return err
}
mainMux = newMainServerMux(ln, srv, portFromListenAddr(addr), a.logger.Logger)
err = mainMux.Serve()
case tlsMode == mainTLSOff:
err = srv.ListenAndServe()
case tlsMode == mainTLSFromFiles:
err = srv.ListenAndServeTLS(certFile, keyFile)
case tlsMode == mainTLSInMemorySelfSigned:
var ln net.Listener
ln, err = tls.Listen("tcp", addr, srv.TLSConfig)
if err == nil {
err = srv.Serve(ln)
}
default:
err = srv.ListenAndServe()
}
if err != nil && err != http.ErrServerClosed {
return err
}
return nil
@@ -557,6 +622,10 @@ func (a *App) RunWithContext(ctx context.Context) error {
// Shutdown closes the application.
func (a *App) Shutdown() {
shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 5*time.Second)
_ = einoobserve.ShutdownOtel(shutdownCtx)
shutdownCancel()
// Stop the DingTalk/Feishu long-lived connections.
a.robotMu.Lock()
if a.dingCancel != nil {
@@ -0,0 +1,196 @@
package app
import (
"bufio"
"context"
"crypto/tls"
"errors"
"fmt"
"net"
"net/http"
"strconv"
"sync"
"time"
"go.uber.org/zap"
)
// peekedConn hands a connection to net/http or crypto/tls after its first byte has been peeked.
type peekedConn struct {
net.Conn
r *bufio.Reader
}
func (c *peekedConn) Read(p []byte) (int, error) {
return c.r.Read(p)
}
// oneConnListener lets http.Server.Serve drive a single TCP connection (keep-alive included).
type oneConnListener struct {
conn net.Conn
addr net.Addr
once sync.Once
}
func (l *oneConnListener) Accept() (net.Conn, error) {
var c net.Conn
l.once.Do(func() {
c = l.conn
l.conn = nil
})
if c == nil {
return nil, net.ErrClosed
}
return c, nil
}
func (l *oneConnListener) Close() error { return nil }
func (l *oneConnListener) Addr() net.Addr { return l.addr }
func isTLSHandshakeRecord(b byte) bool {
return b == 0x16
}
func newHTTPToHTTPSRedirectHandler(httpsPort int) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
host := r.Host
if h, _, err := net.SplitHostPort(host); err == nil {
host = h
}
var target string
if httpsPort == 443 {
target = fmt.Sprintf("https://%s%s", host, r.URL.RequestURI())
} else {
target = fmt.Sprintf("https://%s:%d%s", host, httpsPort, r.URL.RequestURI())
}
http.Redirect(w, r, target, http.StatusPermanentRedirect)
})
}
func portFromListenAddr(addr string) int {
_, portStr, err := net.SplitHostPort(addr)
if err != nil {
return 443
}
p, err := strconv.Atoi(portStr)
if err != nil || p <= 0 {
return 443
}
return p
}
func ensureMainTLSConfigCerts(mode mainTLSMode, tlsConf *tls.Config, certFile, keyFile string) (*tls.Config, error) {
if mode != mainTLSFromFiles {
return tlsConf, nil
}
if tlsConf == nil {
tlsConf = &tls.Config{MinVersion: tls.VersionTLS12}
}
if len(tlsConf.Certificates) > 0 {
return tlsConf, nil
}
cert, err := tls.LoadX509KeyPair(certFile, keyFile)
if err != nil {
return nil, err
}
tlsConf.Certificates = []tls.Certificate{cert}
return tlsConf, nil
}
type mainServerMux struct {
ln net.Listener
httpsSrv *http.Server
redirectSrv *http.Server
logger *zap.Logger
}
func newMainServerMux(ln net.Listener, httpsSrv *http.Server, httpsPort int, logger *zap.Logger) *mainServerMux {
if logger == nil {
// Tests pass a nil logger; calling methods on a nil *zap.Logger panics.
logger = zap.NewNop()
}
return &mainServerMux{
ln: ln,
httpsSrv: httpsSrv,
redirectSrv: &http.Server{Handler: newHTTPToHTTPSRedirectHandler(httpsPort), ReadHeaderTimeout: 10 * time.Second},
logger: logger,
}
}
func (m *mainServerMux) Serve() error {
for {
conn, err := m.ln.Accept()
if err != nil {
if errors.Is(err, net.ErrClosed) {
return http.ErrServerClosed
}
return err
}
go m.handleConn(conn)
}
}
func (m *mainServerMux) handleConn(raw net.Conn) {
if err := raw.SetReadDeadline(time.Now().Add(10 * time.Second)); err != nil {
_ = raw.Close()
return
}
br := bufio.NewReader(raw)
b, err := br.Peek(1)
if err != nil {
_ = raw.Close()
return
}
_ = raw.SetReadDeadline(time.Time{})
pc := &peekedConn{Conn: raw, r: br}
if isTLSHandshakeRecord(b[0]) {
m.serveHTTPS(pc, raw.LocalAddr())
return
}
ocl := &oneConnListener{conn: pc, addr: raw.LocalAddr()}
if err := m.redirectSrv.Serve(ocl); err != nil && !errors.Is(err, net.ErrClosed) && !errors.Is(err, http.ErrServerClosed) {
m.logger.Debug("HTTP 重定向连接处理结束", zap.Error(err))
}
}
// serveHTTPS finishes the handshake on a connection already sniffed as TLS, then serves it over HTTP/2 or HTTP/1.1 per ALPN.
// Never call Serve concurrently on the same http.Server with TLSConfig != nil, or the handshake/ALPN misbehaves (browsers report ERR_SSL_PROTOCOL_ERROR).
func (m *mainServerMux) serveHTTPS(pc *peekedConn, localAddr net.Addr) {
tlsConn := tls.Server(pc, m.httpsSrv.TLSConfig)
handCtx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
defer cancel()
if err := tlsConn.HandshakeContext(handCtx); err != nil {
m.logger.Debug("TLS 握手失败", zap.Error(err))
_ = pc.Close()
return
}
srv := m.httpsSrv
if srv.TLSNextProto != nil {
proto := tlsConn.ConnectionState().NegotiatedProtocol
if fn := srv.TLSNextProto[proto]; fn != nil {
fn(srv, tlsConn, srv.Handler)
return
}
}
plain := *srv
plain.TLSConfig = nil
ocl := &oneConnListener{conn: tlsConn, addr: localAddr}
if err := plain.Serve(ocl); err != nil && !errors.Is(err, net.ErrClosed) && !errors.Is(err, http.ErrServerClosed) {
m.logger.Debug("HTTPS 连接处理结束", zap.Error(err))
}
}
func (m *mainServerMux) Shutdown(ctx context.Context) error {
_ = m.ln.Close()
var err1, err2 error
if m.httpsSrv != nil {
err1 = m.httpsSrv.Shutdown(ctx)
}
if m.redirectSrv != nil {
err2 = m.redirectSrv.Shutdown(ctx)
}
if err1 != nil {
return err1
}
return err2
}
@@ -0,0 +1,150 @@
package app
import (
"crypto/tls"
"io"
"net"
"net/http"
"net/http/httptest"
"strconv"
"testing"
"cyberstrike-ai/internal/config"
"golang.org/x/net/http2"
)
func TestNewHTTPToHTTPSRedirectHandler(t *testing.T) {
t.Parallel()
tests := []struct {
name string
httpsPort int
host string
uri string
wantTarget string
}{
{
name: "non standard port",
httpsPort: 8080,
host: "127.0.0.1:8080",
uri: "/login?next=/",
wantTarget: "https://127.0.0.1:8080/login?next=/",
},
{
name: "standard port",
httpsPort: 443,
host: "example.com:80",
uri: "/",
wantTarget: "https://example.com/",
},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
h := newHTTPToHTTPSRedirectHandler(tt.httpsPort)
req := httptest.NewRequest(http.MethodGet, "http://"+tt.host+tt.uri, nil)
req.Host = tt.host
rec := httptest.NewRecorder()
h.ServeHTTP(rec, req)
if rec.Code != http.StatusPermanentRedirect {
t.Fatalf("status = %d, want %d", rec.Code, http.StatusPermanentRedirect)
}
if got := rec.Header().Get("Location"); got != tt.wantTarget {
t.Fatalf("Location = %q, want %q", got, tt.wantTarget)
}
})
}
}
func TestIsTLSHandshakeRecord(t *testing.T) {
t.Parallel()
if !isTLSHandshakeRecord(0x16) {
t.Fatal("expected TLS handshake record")
}
if isTLSHandshakeRecord('G') {
t.Fatal("GET should not be TLS")
}
}
func TestServerHTTPRedirectEnabled(t *testing.T) {
t.Parallel()
disabled := false
enabled := true
if config.ServerHTTPRedirectEnabled(nil) {
t.Fatal("nil config should disable redirect")
}
if !config.ServerHTTPRedirectEnabled(&config.ServerConfig{TLSEnabled: true}) {
t.Fatal("HTTPS without explicit flag should enable redirect")
}
if config.ServerHTTPRedirectEnabled(&config.ServerConfig{TLSEnabled: true, TLSHTTPRedirect: &disabled}) {
t.Fatal("explicit false should disable redirect")
}
if !config.ServerHTTPRedirectEnabled(&config.ServerConfig{TLSEnabled: true, TLSHTTPRedirect: &enabled}) {
t.Fatal("explicit true should enable redirect")
}
if config.ServerHTTPRedirectEnabled(&config.ServerConfig{}) {
t.Fatal("plain HTTP should not redirect")
}
}
func TestMainServerMuxHTTPRedirectAndHTTPS(t *testing.T) {
cert, err := generateMainServerSelfSignedCert()
if err != nil {
t.Fatalf("generate cert: %v", err)
}
handler := http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
_, _ = io.WriteString(w, "ok")
})
srv := &http.Server{Handler: handler, TLSConfig: &tls.Config{
MinVersion: tls.VersionTLS12,
Certificates: []tls.Certificate{cert},
}}
if err := http2.ConfigureServer(srv, &http2.Server{}); err != nil {
t.Fatalf("configure http2: %v", err)
}
ln, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatalf("listen: %v", err)
}
defer ln.Close()
mux := newMainServerMux(ln, srv, portFromListenAddr(ln.Addr().String()), nil)
go func() { _ = mux.Serve() }()
client := &http.Client{
Transport: &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true, MinVersion: tls.VersionTLS12},
},
CheckRedirect: func(_ *http.Request, _ []*http.Request) error {
return http.ErrUseLastResponse
},
}
addr := ln.Addr().String()
httpResp, err := client.Get("http://" + addr + "/")
if err != nil {
t.Fatalf("http get: %v", err)
}
_ = httpResp.Body.Close()
if httpResp.StatusCode != http.StatusPermanentRedirect {
t.Fatalf("http status = %d, want %d", httpResp.StatusCode, http.StatusPermanentRedirect)
}
if got := httpResp.Header.Get("Location"); got != "https://127.0.0.1:"+strconv.Itoa(portFromListenAddr(addr))+"/" {
t.Fatalf("Location = %q", got)
}
httpsResp, err := client.Get("https://" + addr + "/")
if err != nil {
t.Fatalf("https get: %v", err)
}
defer httpsResp.Body.Close()
if httpsResp.StatusCode != http.StatusOK {
t.Fatalf("https status = %d, want %d", httpsResp.StatusCode, http.StatusOK)
}
body, _ := io.ReadAll(httpsResp.Body)
if string(body) != "ok" {
t.Fatalf("body = %q, want ok", body)
}
}
@@ -0,0 +1,86 @@
package app
import (
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/pem"
"fmt"
"math/big"
"net"
"strings"
"time"
"cyberstrike-ai/internal/config"
)
// mainTLSMode selects how the main web service starts TLS.
type mainTLSMode int
const (
mainTLSOff mainTLSMode = iota
mainTLSFromFiles
mainTLSInMemorySelfSigned
)
// prepareMainServerTLS decides from the server config whether the main site enables HTTPS (and HTTP/2 negotiation).
// fromFiles: use tls_cert_path + tls_key_path; http.Server.ListenAndServeTLS loads the PEM files.
// inMemory: a self-signed certificate generated when tls_auto_self_sign is set; local/testing use only.
func prepareMainServerTLS(cfg *config.ServerConfig) (mode mainTLSMode, tlsConf *tls.Config, certFile, keyFile string, err error) {
if cfg == nil || !config.MainWebUIUsesHTTPS(cfg) {
return mainTLSOff, nil, "", "", nil
}
certFile = strings.TrimSpace(cfg.TLSCertPath)
keyFile = strings.TrimSpace(cfg.TLSKeyPath)
if certFile != "" && keyFile != "" {
// Certificates are loaded from files by ListenAndServeTLS; supply only a minimal TLS config here for http2.ConfigureServer to merge ALPN into.
return mainTLSFromFiles, &tls.Config{MinVersion: tls.VersionTLS12}, certFile, keyFile, nil
}
if cfg.TLSAutoSelfSign {
cert, genErr := generateMainServerSelfSignedCert()
if genErr != nil {
return mainTLSOff, nil, "", "", fmt.Errorf("生成自签 TLS 证书: %w", genErr)
}
tlsConf = &tls.Config{
MinVersion: tls.VersionTLS12,
Certificates: []tls.Certificate{cert},
}
return mainTLSInMemorySelfSigned, tlsConf, "", "", nil
}
return mainTLSOff, nil, "", "", fmt.Errorf("server: 已启用 TLS(tls_enabled / tls_auto_self_sign / 证书路径),请设置 tls_cert_path 与 tls_key_path,或将 tls_auto_self_sign 设为 true(仅测试环境)")
}
func generateMainServerSelfSignedCert() (tls.Certificate, error) {
priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
return tls.Certificate{}, err
}
serial, err := rand.Int(rand.Reader, big.NewInt(1<<62))
if err != nil {
return tls.Certificate{}, err
}
tmpl := &x509.Certificate{
SerialNumber: serial,
Subject: pkix.Name{CommonName: "CyberStrikeAI"},
NotBefore: time.Now().Add(-1 * time.Hour),
NotAfter: time.Now().Add(365 * 24 * time.Hour),
KeyUsage: x509.KeyUsageDigitalSignature,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("::1")},
DNSNames: []string{"localhost"},
}
der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
if err != nil {
return tls.Certificate{}, err
}
keyDER, err := x509.MarshalECPrivateKey(priv)
if err != nil {
return tls.Certificate{}, err
}
certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
keyPEM := pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})
return tls.X509KeyPair(certPEM, keyPEM)
}
@@ -301,7 +301,7 @@ func (b *Builder) formatProcessDetailsForAttackChain(details []database.ProcessD
// Goal: emit the whole iteration round from the main agent's (orchestrator's) point of view
// - Keep: orchestrator tool calls/results, task dispatch to sub-agents, sub-agents' final replies (no reasoning)
// - Drop: noise such as thinking/planning/progress, plus sub-agents' tool details and reasoning
if d.EventType == "progress" || d.EventType == "thinking" || d.EventType == "planning" {
if d.EventType == "progress" || d.EventType == "thinking" || d.EventType == "reasoning_chain" || d.EventType == "planning" {
continue
}
@@ -63,6 +63,126 @@ type MultiAgentConfig struct {
EinoSkills MultiAgentEinoSkillsConfig `yaml:"eino_skills,omitempty" json:"eino_skills,omitempty"`
// EinoMiddleware wires optional ADK middleware (patchtoolcalls, toolsearch, plantask, reduction) and Deep extras.
EinoMiddleware MultiAgentEinoMiddlewareConfig `yaml:"eino_middleware,omitempty" json:"eino_middleware,omitempty"`
// EinoCallbacks attaches CloudWeGo eino callbacks.InitCallbacks on ADK Runner context (structured logs + optional SSE trace).
EinoCallbacks MultiAgentEinoCallbacksConfig `yaml:"eino_callbacks,omitempty" json:"eino_callbacks,omitempty"`
}
// MultiAgentEinoCallbacksConfig enables Eino unified callbacks on each ADK agent run (deep / plan_execute / supervisor / eino_single).
// Modes: log_only (zap + optional OTel; no SSE to browser), sse (adds client SSE eino_trace_* when sse_trace_to_client), full (sse rules + stream callback copies closed).
type MultiAgentEinoCallbacksConfig struct {
Enabled bool `yaml:"enabled" json:"enabled"`
Mode string `yaml:"mode,omitempty" json:"mode,omitempty"` // log_only | sse | full; empty with enabled=true defaults to log_only
// SseTraceToClient when true emits eino_trace_* SSE for UI (use only for admin/debug; nil/false recommended in production).
SseTraceToClient *bool `yaml:"sse_trace_to_client,omitempty" json:"sse_trace_to_client,omitempty"`
// Otel configures OpenTelemetry trace export (independent of mode; exporter none disables export even if enabled).
Otel MultiAgentEinoCallbacksOtelConfig `yaml:"otel,omitempty" json:"otel,omitempty"`
// MaxInputSummaryRunes / MaxOutputSummaryRunes cap text placed in SSE payloads and debug logs (not full payloads).
MaxInputSummaryRunes int `yaml:"max_input_summary_runes,omitempty" json:"max_input_summary_runes,omitempty"`
MaxOutputSummaryRunes int `yaml:"max_output_summary_runes,omitempty" json:"max_output_summary_runes,omitempty"`
// ZapVerbose when true logs input/output summaries at zap.Debug on start/end; false uses Info with short fields only.
ZapVerbose bool `yaml:"zap_verbose,omitempty" json:"zap_verbose,omitempty"`
}
// MultiAgentEinoCallbacksOtelConfig OpenTelemetry for Eino callback spans (W3C trace in collector / stdout).
type MultiAgentEinoCallbacksOtelConfig struct {
Enabled bool `yaml:"enabled" json:"enabled"`
ServiceName string `yaml:"service_name,omitempty" json:"service_name,omitempty"`
Exporter string `yaml:"exporter,omitempty" json:"exporter,omitempty"` // none | stdout | otlphttp
OTLPEndpoint string `yaml:"otlp_endpoint,omitempty" json:"otlp_endpoint,omitempty"` // host:port, e.g. localhost:4318 (path /v1/traces)
SampleRatio float64 `yaml:"sample_ratio,omitempty" json:"sample_ratio,omitempty"` // 0 to 1, default 1.0
}
// EinoCallbacksModeEffective returns off | log_only | sse | full.
func (c MultiAgentEinoCallbacksConfig) EinoCallbacksModeEffective() string {
if !c.Enabled {
return "off"
}
m := strings.TrimSpace(strings.ToLower(c.Mode))
switch m {
case "sse", "full":
return m
default: // "", "log_only", and unknown values all fall back to log_only
return "log_only"
}
}
// SseTraceToClientEffective is false unless explicitly set true (best practice: do not expose framework traces to end users by default).
func (c MultiAgentEinoCallbacksConfig) SseTraceToClientEffective() bool {
if c.SseTraceToClient == nil {
return false
}
return *c.SseTraceToClient
}
// ShouldEmitEinoTraceSSE is true when client-visible trace events should be sent over progress/SSE.
func (c MultiAgentEinoCallbacksConfig) ShouldEmitEinoTraceSSE(mode string) bool {
if !c.SseTraceToClientEffective() {
return false
}
return mode == "sse" || mode == "full"
}
// OtelExporterEffective returns none | stdout | otlphttp.
func (c MultiAgentEinoCallbacksOtelConfig) OtelExporterEffective() string {
e := strings.TrimSpace(strings.ToLower(c.Exporter))
switch e {
case "none", "stdout", "otlphttp":
return e
case "":
if c.Enabled {
return "stdout"
}
return "none"
default:
return "none"
}
}
// OtelTracingActive is true when spans should be started (enabled + non-none exporter).
func (c MultiAgentEinoCallbacksConfig) OtelTracingActive() bool {
if !c.Otel.Enabled {
return false
}
return c.Otel.OtelExporterEffective() != "none"
}
func (c MultiAgentEinoCallbacksOtelConfig) ServiceNameEffective() string {
s := strings.TrimSpace(c.ServiceName)
if s != "" {
return s
}
return "cyberstrike-ai"
}
func (c MultiAgentEinoCallbacksOtelConfig) SampleRatioEffective() float64 {
r := c.SampleRatio
if r <= 0 {
return 1.0
}
if r > 1 {
return 1.0
}
return r
}
func (c MultiAgentEinoCallbacksConfig) EinoCallbacksMaxInputSummaryRunes() int {
if c.MaxInputSummaryRunes > 0 {
return c.MaxInputSummaryRunes
}
return 400
}
func (c MultiAgentEinoCallbacksConfig) EinoCallbacksMaxOutputSummaryRunes() int {
if c.MaxOutputSummaryRunes > 0 {
return c.MaxOutputSummaryRunes
}
return 400
}
// MultiAgentEinoMiddlewareConfig optional Eino ADK middleware and Deep / supervisor tuning.
@@ -271,7 +391,8 @@ type MultiAgentAPIUpdate struct {
RobotUseMultiAgent bool `json:"robot_use_multi_agent"`
BatchUseMultiAgent bool `json:"batch_use_multi_agent"`
PlanExecuteLoopMaxIterations *int `json:"plan_execute_loop_max_iterations,omitempty"`
ToolSearchAlwaysVisibleTools []string `json:"tool_search_always_visible_tools,omitempty"`
// A pointer distinguishes "field absent from JSON" from "empty array sent to clear"; when omitted, the always-visible tool whitelist from YAML must not be overwritten.
ToolSearchAlwaysVisibleTools *[]string `json:"tool_search_always_visible_tools,omitempty"`
}
// RobotsConfig holds robot integrations (WeCom, DingTalk, Feishu, etc.)
@@ -323,8 +444,17 @@ type RobotLarkConfig struct {
}
type ServerConfig struct {
Host string `yaml:"host"`
Port int `yaml:"port"`
Host string `yaml:"host" json:"host"`
Port int `yaml:"port" json:"port"`
// TLSEnabled serves the main Web UI over HTTPS when true; modern browsers then negotiate HTTP/2 on the origin, easing the HTTP/1.1 per-origin connection limit.
TLSEnabled bool `yaml:"tls_enabled,omitempty" json:"tls_enabled,omitempty"`
// TLSCertPath / TLSKeyPath load the certificate from PEM files when both are non-empty (recommended for production).
TLSCertPath string `yaml:"tls_cert_path,omitempty" json:"tls_cert_path,omitempty"`
TLSKeyPath string `yaml:"tls_key_path,omitempty" json:"tls_key_path,omitempty"`
// TLSAutoSelfSign generates an in-memory self-signed certificate at startup when true and no valid cert paths are configured (local/testing only; browsers will warn).
TLSAutoSelfSign bool `yaml:"tls_auto_self_sign,omitempty" json:"tls_auto_self_sign,omitempty"`
// TLSHTTPRedirect disables the HTTP→HTTPS redirect when set to false; when omitted or true with HTTPS enabled, plaintext HTTP requests receive a 308 redirect to HTTPS (same-port sniffing).
TLSHTTPRedirect *bool `yaml:"tls_http_redirect,omitempty" json:"tls_http_redirect,omitempty"`
}
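The new `server.tls_*` keys map onto this struct through its yaml tags; a sketch of what they look like in `config.yaml` (host, port, and paths are illustrative values):

```yaml
server:
  host: 0.0.0.0
  port: 8443
  tls_enabled: true
  tls_cert_path: /etc/cyberstrike/cert.pem   # leave both paths empty to rely on tls_auto_self_sign
  tls_key_path: /etc/cyberstrike/key.pem
  # tls_auto_self_sign: true    # in-memory self-signed cert, local testing only
  # tls_http_redirect: false    # disable the HTTP→HTTPS 308 redirect
```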
type LogConfig struct {
@@ -346,6 +476,48 @@ type OpenAIConfig struct {
BaseURL string `yaml:"base_url" json:"base_url"`
Model string `yaml:"model" json:"model"`
MaxTotalTokens int `yaml:"max_total_tokens,omitempty" json:"max_total_tokens,omitempty"`
// Reasoning controls the Eino ChatModel's thinking / reasoning_effort / output_config and similar fields (Eino path only; native ReAct ignores it).
Reasoning OpenAIReasoningConfig `yaml:"reasoning,omitempty" json:"reasoning,omitempty"`
}
// OpenAIReasoningConfig holds the global defaults and gateway profile (the chat page may override via ChatRequest.reasoning, subject to AllowClientReasoning).
type OpenAIReasoningConfig struct {
// Mode: auto (default) | on | off | default (same as auto). When off, no reasoning extension fields are attached to the model request.
Mode string `yaml:"mode,omitempty" json:"mode,omitempty"`
// Effort: low | medium | high | max; empty means no explicit effort (per-profile behavior lives in internal/reasoning).
Effort string `yaml:"effort,omitempty" json:"effort,omitempty"`
// AllowClientReasoning ignores the request body's reasoning field when false; nil/unset is equivalent to true.
AllowClientReasoning *bool `yaml:"allow_client_reasoning,omitempty" json:"allow_client_reasoning,omitempty"`
// Profile: auto | deepseek_compat | openai_compat | output_config_effort
Profile string `yaml:"profile,omitempty" json:"profile,omitempty"`
// ExtraRequestFields is merged into the Chat Completions root JSON (admin use; automatically generated fields override same-named keys).
ExtraRequestFields map[string]interface{} `yaml:"extra_request_fields,omitempty" json:"extra_request_fields,omitempty"`
}
// ModeEffective returns auto when empty or default.
func (c OpenAIReasoningConfig) ModeEffective() string {
m := strings.ToLower(strings.TrimSpace(c.Mode))
if m == "" || m == "default" {
return "auto"
}
return m
}
// ProfileEffective returns auto when empty.
func (c OpenAIReasoningConfig) ProfileEffective() string {
p := strings.ToLower(strings.TrimSpace(c.Profile))
if p == "" {
return "auto"
}
return p
}
// AllowClientReasoningEffective true when client may send ChatRequest.reasoning.
func (c OpenAIReasoningConfig) AllowClientReasoningEffective() bool {
if c.AllowClientReasoning == nil {
return true
}
return *c.AllowClientReasoning
}
type FofaConfig struct {
@@ -0,0 +1,46 @@
package config
import "strings"
// MainWebUIUsesHTTPS reports whether the main Web UI listens over HTTPS (matching the precondition of internal/app.prepareMainServerTLS).
func MainWebUIUsesHTTPS(s *ServerConfig) bool {
if s == nil {
return false
}
if s.TLSEnabled {
return true
}
if s.TLSAutoSelfSign {
return true
}
cert := strings.TrimSpace(s.TLSCertPath)
key := strings.TrimSpace(s.TLSKeyPath)
return cert != "" && key != ""
}
// ServerHTTPRedirectEnabled reports whether plaintext HTTP requests are redirected to HTTPS when the main site has HTTPS enabled (default on).
func ServerHTTPRedirectEnabled(s *ServerConfig) bool {
if s == nil || !MainWebUIUsesHTTPS(s) {
return false
}
if s.TLSHTTPRedirect == nil {
return true
}
return *s.TLSHTTPRedirect
}
// ApplyDevHTTPSBootstrap is used by --https / one-click scripts: it forces TLS on for the main site.
// If tls_cert_path and tls_key_path are both set, only the PEM files are used and self-signing stays off; otherwise tls_auto_self_sign is enabled (in-memory cert, local testing only).
func ApplyDevHTTPSBootstrap(cfg *Config) {
if cfg == nil {
return
}
cfg.Server.TLSEnabled = true
cert := strings.TrimSpace(cfg.Server.TLSCertPath)
key := strings.TrimSpace(cfg.Server.TLSKeyPath)
if cert != "" && key != "" {
cfg.Server.TLSAutoSelfSign = false
return
}
cfg.Server.TLSAutoSelfSign = true
}
@@ -25,14 +25,15 @@ type Conversation struct {
// Message is a chat message.
type Message struct {
ID string `json:"id"`
ConversationID string `json:"conversationId"`
Role string `json:"role"`
Content string `json:"content"`
MCPExecutionIDs []string `json:"mcpExecutionIds,omitempty"`
ProcessDetails []map[string]interface{} `json:"processDetails,omitempty"`
CreatedAt time.Time `json:"createdAt"`
UpdatedAt time.Time `json:"updatedAt"`
ID string `json:"id"`
ConversationID string `json:"conversationId"`
Role string `json:"role"`
Content string `json:"content"`
ReasoningContent string `json:"reasoningContent,omitempty"`
MCPExecutionIDs []string `json:"mcpExecutionIds,omitempty"`
ProcessDetails []map[string]interface{} `json:"processDetails,omitempty"`
CreatedAt time.Time `json:"createdAt"`
UpdatedAt time.Time `json:"updatedAt"`
}
// CreateConversation creates a new conversation.
@@ -116,6 +117,7 @@ func (db *DB) GetConversationByWebshellConnectionID(connectionID string) (*Conve
}
for i := range conv.Messages {
if details, ok := processDetailsMap[conv.Messages[i].ID]; ok {
details = DedupeConsecutiveProcessDetails(details)
detailsJSON := make([]map[string]interface{}, len(details))
for j, detail := range details {
var data interface{}
@@ -234,6 +236,7 @@ func (db *DB) GetConversation(id string) (*Conversation, error) {
// Attach process details to their messages.
for i := range conv.Messages {
if details, ok := processDetailsMap[conv.Messages[i].ID]; ok {
details = DedupeConsecutiveProcessDetails(details)
// Convert each ProcessDetail to JSON for the frontend.
detailsJSON := make([]map[string]interface{}, len(details))
for j, detail := range details {
@@ -498,8 +501,8 @@ func (db *DB) AddMessage(conversationID, role, content string, mcpExecutionIDs [
}
_, err := db.Exec(
"INSERT INTO messages (id, conversation_id, role, content, mcp_execution_ids, created_at, updated_at) VALUES (?, ?, ?, ?, ?, ?, ?)",
id, conversationID, role, content, mcpIDsJSON, now, now,
"INSERT INTO messages (id, conversation_id, role, content, reasoning_content, mcp_execution_ids, created_at, updated_at) VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
id, conversationID, role, content, "", mcpIDsJSON, now, now,
)
if err != nil {
return nil, fmt.Errorf("添加消息失败: %w", err)
@@ -523,10 +526,30 @@ func (db *DB) AddMessage(conversationID, role, content string, mcpExecutionIDs [
return message, nil
}
// UpdateAssistantMessageFinalize writes the assistant message's final state (body, MCP execution ids, aggregated reasoning text) for replay on the no-trajectory fallback path.
func (db *DB) UpdateAssistantMessageFinalize(messageID, content string, mcpExecutionIDs []string, reasoningContent string) error {
var mcpIDsJSON string
if len(mcpExecutionIDs) > 0 {
jsonData, err := json.Marshal(mcpExecutionIDs)
if err != nil {
return fmt.Errorf("序列化MCP执行ID失败: %w", err)
}
mcpIDsJSON = string(jsonData)
}
_, err := db.Exec(
"UPDATE messages SET content = ?, mcp_execution_ids = ?, reasoning_content = ?, updated_at = ? WHERE id = ?",
content, mcpIDsJSON, strings.TrimSpace(reasoningContent), time.Now(), messageID,
)
if err != nil {
return fmt.Errorf("更新助手消息失败: %w", err)
}
return nil
}
// GetMessages returns all messages of a conversation.
func (db *DB) GetMessages(conversationID string) ([]Message, error) {
rows, err := db.Query(
"SELECT id, conversation_id, role, content, mcp_execution_ids, created_at, updated_at FROM messages WHERE conversation_id = ? ORDER BY created_at ASC",
"SELECT id, conversation_id, role, content, reasoning_content, mcp_execution_ids, created_at, updated_at FROM messages WHERE conversation_id = ? ORDER BY created_at ASC",
conversationID,
)
if err != nil {
@@ -537,13 +560,17 @@ func (db *DB) GetMessages(conversationID string) ([]Message, error) {
var messages []Message
for rows.Next() {
var msg Message
var reasoning sql.NullString
var mcpIDsJSON sql.NullString
var createdAt string
var updatedAt sql.NullString
if err := rows.Scan(&msg.ID, &msg.ConversationID, &msg.Role, &msg.Content, &mcpIDsJSON, &createdAt, &updatedAt); err != nil {
if err := rows.Scan(&msg.ID, &msg.ConversationID, &msg.Role, &msg.Content, &reasoning, &mcpIDsJSON, &createdAt, &updatedAt); err != nil {
return nil, fmt.Errorf("扫描消息失败: %w", err)
}
if reasoning.Valid {
msg.ReasoningContent = reasoning.String
}
// Try several time formats when parsing.
var err error
@@ -683,7 +710,7 @@ type ProcessDetail struct {
ID string `json:"id"`
MessageID string `json:"messageId"`
ConversationID string `json:"conversationId"`
EventType string `json:"eventType"` // iteration, thinking, tool_calls_detected, tool_call, tool_result, progress, error
EventType string `json:"eventType"` // iteration, thinking, reasoning_chain, tool_calls_detected, tool_call, tool_result, progress, error
Message string `json:"message"`
Data string `json:"data"` // JSON格式的数据
CreatedAt time.Time `json:"createdAt"`
@@ -594,6 +594,25 @@ func (db *DB) migrateMessagesTable() error {
// Backfill existing rows: make updated_at at least created_at so the frontend never falls back to empty/current time.
_, _ = db.Exec("UPDATE messages SET updated_at = created_at WHERE updated_at IS NULL OR updated_at = ''")
// reasoning_content: DeepSeek thinking mode + tool-call continuation; complements last_react_input for replay via the messages-table fallback path.
var rcColCount int
errRC := db.QueryRow("SELECT COUNT(*) FROM pragma_table_info('messages') WHERE name='reasoning_content'").Scan(&rcColCount)
if errRC != nil {
if _, addErr := db.Exec("ALTER TABLE messages ADD COLUMN reasoning_content TEXT"); addErr != nil {
errMsg := strings.ToLower(addErr.Error())
if !strings.Contains(errMsg, "duplicate column") && !strings.Contains(errMsg, "already exists") {
return fmt.Errorf("添加 messages.reasoning_content 字段失败: %w", addErr)
}
}
} else if rcColCount == 0 {
if _, err := db.Exec("ALTER TABLE messages ADD COLUMN reasoning_content TEXT"); err != nil {
errMsg := strings.ToLower(err.Error())
if !strings.Contains(errMsg, "duplicate column") && !strings.Contains(errMsg, "already exists") {
return fmt.Errorf("添加 messages.reasoning_content 字段失败: %w", err)
}
}
}
return nil
}
@@ -0,0 +1,28 @@
package database
import (
"fmt"
"strings"
)
// DedupeConsecutiveProcessDetails drops adjacent, semantically identical process details (fingerprinting on the raw JSON of the DB data column, since map serialization has unstable key order).
func DedupeConsecutiveProcessDetails(rows []ProcessDetail) []ProcessDetail {
if len(rows) < 2 {
return rows
}
out := make([]ProcessDetail, 0, len(rows))
var lastKey string
for _, d := range rows {
key := processDetailRowKey(d)
if len(out) > 0 && key != "" && key == lastKey {
continue
}
out = append(out, d)
lastKey = key
}
return out
}
func processDetailRowKey(d ProcessDetail) string {
return fmt.Sprintf("%s\x00%s\x00%s", d.EventType, strings.TrimSpace(d.Message), d.Data)
}
@@ -0,0 +1,435 @@
// Package einoobserve attaches CloudWeGo Eino [callbacks.Handler] to ADK Runner contexts for
// structured logging and optional SSE trace events (eino_trace_*).
package einoobserve
import (
"context"
"encoding/json"
"fmt"
"strings"
"sync"
"sync/atomic"
"time"
"cyberstrike-ai/internal/config"
"github.com/cloudwego/eino/adk"
"github.com/cloudwego/eino/callbacks"
"github.com/cloudwego/eino/components"
"github.com/cloudwego/eino/components/model"
"github.com/cloudwego/eino/components/tool"
"github.com/cloudwego/eino/schema"
"github.com/google/uuid"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/attribute"
"go.opentelemetry.io/otel/codes"
"go.opentelemetry.io/otel/trace"
"go.uber.org/zap"
)
type ctxSpanKey struct{}
type ctxOtelSpanKey struct{}
// Params for attaching per-run callback instrumentation.
type Params struct {
Logger *zap.Logger
Progress func(eventType, message string, data interface{})
ConversationID string
OrchMode string
OrchestratorName string
}
// AttachAgentRunCallbacks returns ctx wrapped with callbacks.InitCallbacks when enabled.
// Safe to call with nil cfg or disabled cfg (returns ctx unchanged).
func AttachAgentRunCallbacks(ctx context.Context, cfg *config.MultiAgentEinoCallbacksConfig, p Params) context.Context {
if ctx == nil {
return ctx
}
if cfg == nil || !cfg.Enabled {
return ctx
}
mode := cfg.EinoCallbacksModeEffective()
if mode == "off" {
return ctx
}
runID := uuid.New().String()
if p.Progress != nil && cfg.ShouldEmitEinoTraceSSE(mode) {
p.Progress("eino_trace_run", "Eino callbacks session", map[string]interface{}{
"runId": runID,
"conversationId": strings.TrimSpace(p.ConversationID),
"orchestration": strings.TrimSpace(p.OrchMode),
"orchestratorName": strings.TrimSpace(p.OrchestratorName),
"observeMode": mode,
"source": "eino_callbacks",
})
}
h := &runHandler{
cfg: *cfg,
mode: mode,
params: p,
runID: runID,
}
b := callbacks.NewHandlerBuilder().
OnStartFn(h.onStart).
OnEndFn(h.onEnd).
OnErrorFn(h.onError)
if mode == "full" {
b = b.OnStartWithStreamInputFn(h.onStartStreamIn).OnEndWithStreamOutputFn(h.onEndStreamOut)
}
ri := &callbacks.RunInfo{
Name: "CyberStrikeADKRun",
Type: strings.TrimSpace(p.OrchMode),
Component: components.Component("AgentSession"),
}
return callbacks.InitCallbacks(ctx, ri, b.Build())
}
type runHandler struct {
cfg config.MultiAgentEinoCallbacksConfig
mode string
params Params
runID string
mu sync.Mutex
spanStack []string
seq atomic.Uint64
}
func (h *runHandler) genSpanID() string {
return fmt.Sprintf("%s-%d", h.runID, h.seq.Add(1))
}
func (h *runHandler) popSpan() (id string) {
h.mu.Lock()
defer h.mu.Unlock()
if len(h.spanStack) == 0 {
return ""
}
id = h.spanStack[len(h.spanStack)-1]
h.spanStack = h.spanStack[:len(h.spanStack)-1]
return id
}
// popMatching pops the given id off the stack top if it matches; otherwise it discards entries until it finds the match or empties the stack (rare ordering mismatch).
func (h *runHandler) popMatching(want string) string {
h.mu.Lock()
defer h.mu.Unlock()
if want == "" {
if len(h.spanStack) == 0 {
return ""
}
id := h.spanStack[len(h.spanStack)-1]
h.spanStack = h.spanStack[:len(h.spanStack)-1]
return id
}
for len(h.spanStack) > 0 {
top := h.spanStack[len(h.spanStack)-1]
h.spanStack = h.spanStack[:len(h.spanStack)-1]
if top == want {
return top
}
}
return want
}
func (h *runHandler) onStart(ctx context.Context, info *callbacks.RunInfo, input callbacks.CallbackInput) context.Context {
var parentID string
h.mu.Lock()
if len(h.spanStack) > 0 {
parentID = h.spanStack[len(h.spanStack)-1]
}
spanID := h.genSpanID()
h.spanStack = append(h.spanStack, spanID)
h.mu.Unlock()
inSum := summarizeCallbackInput(input, h.cfg.EinoCallbacksMaxInputSummaryRunes())
if h.cfg.OtelTracingActive() {
tracer := otel.Tracer("cyberstrike/eino")
spanName := callbackSpanName(info)
var sp trace.Span
ctx, sp = tracer.Start(ctx, spanName,
trace.WithSpanKind(trace.SpanKindInternal),
trace.WithAttributes(
attribute.String("eino.component", string(info.Component)),
attribute.String("eino.name", info.Name),
attribute.String("eino.type", info.Type),
attribute.String("cyberstrike.run_id", h.runID),
attribute.String("cyberstrike.conversation_id", strings.TrimSpace(h.params.ConversationID)),
attribute.String("cyberstrike.orchestration", strings.TrimSpace(h.params.OrchMode)),
),
)
if inSum != "" {
sp.SetAttributes(attribute.String("eino.input.summary", truncateForAttr(inSum, 256)))
}
ctx = context.WithValue(ctx, ctxOtelSpanKey{}, sp)
}
if h.params.Logger != nil {
fields := []zap.Field{
zap.String("runId", h.runID),
zap.String("spanId", spanID),
zap.String("parentSpanId", parentID),
zap.String("component", string(info.Component)),
zap.String("name", info.Name),
zap.String("type", info.Type),
zap.String("phase", "start"),
}
if sp, ok := ctx.Value(ctxOtelSpanKey{}).(trace.Span); ok && sp != nil {
if sc := sp.SpanContext(); sc.IsValid() {
fields = append(fields,
zap.String("trace_id", sc.TraceID().String()),
zap.String("otel_span_id", sc.SpanID().String()),
)
}
}
if h.cfg.ZapVerbose {
h.params.Logger.Debug("eino_callback", append(fields, zap.String("inputSummary", inSum))...)
} else {
h.params.Logger.Info("eino_callback", fields...)
}
}
if h.params.Progress != nil && h.cfg.ShouldEmitEinoTraceSSE(h.mode) {
h.params.Progress("eino_trace_start", "", map[string]interface{}{
"runId": h.runID,
"spanId": spanID,
"parentSpanId": parentID,
"conversationId": strings.TrimSpace(h.params.ConversationID),
"orchestration": strings.TrimSpace(h.params.OrchMode),
"component": string(info.Component),
"name": info.Name,
"type": info.Type,
"ts": time.Now().UTC().Format(time.RFC3339Nano),
"inputSummary": inSum,
"source": "eino_callbacks",
})
}
ctx = context.WithValue(ctx, ctxSpanKey{}, spanID)
return ctx
}
func (h *runHandler) onEnd(ctx context.Context, info *callbacks.RunInfo, output callbacks.CallbackOutput) context.Context {
spanID, _ := ctx.Value(ctxSpanKey{}).(string)
if spanID == "" {
spanID = h.popSpan()
} else {
spanID = h.popMatching(spanID)
}
outSum := summarizeCallbackOutput(output, h.cfg.EinoCallbacksMaxOutputSummaryRunes())
if sp, ok := ctx.Value(ctxOtelSpanKey{}).(trace.Span); ok && sp != nil {
if outSum != "" {
sp.SetAttributes(attribute.String("eino.output.summary", truncateForAttr(outSum, 256)))
}
sp.SetStatus(codes.Ok, "")
sp.End()
}
if h.params.Logger != nil {
fields := []zap.Field{
zap.String("runId", h.runID),
zap.String("spanId", spanID),
zap.String("component", string(info.Component)),
zap.String("name", info.Name),
zap.String("type", info.Type),
zap.String("phase", "end"),
}
if h.cfg.ZapVerbose {
h.params.Logger.Debug("eino_callback", append(fields, zap.String("outputSummary", outSum))...)
} else {
h.params.Logger.Info("eino_callback", fields...)
}
}
if h.params.Progress != nil && h.cfg.ShouldEmitEinoTraceSSE(h.mode) {
h.params.Progress("eino_trace_end", "", map[string]interface{}{
"runId": h.runID,
"spanId": spanID,
"conversationId": strings.TrimSpace(h.params.ConversationID),
"orchestration": strings.TrimSpace(h.params.OrchMode),
"component": string(info.Component),
"name": info.Name,
"type": info.Type,
"ts": time.Now().UTC().Format(time.RFC3339Nano),
"outputSummary": outSum,
"source": "eino_callbacks",
})
}
return ctx
}
func (h *runHandler) onError(ctx context.Context, info *callbacks.RunInfo, err error) context.Context {
spanID, _ := ctx.Value(ctxSpanKey{}).(string)
if spanID == "" {
spanID = h.popSpan()
} else {
spanID = h.popMatching(spanID)
}
msg := ""
if err != nil {
msg = truncateRunes(err.Error(), h.cfg.EinoCallbacksMaxOutputSummaryRunes())
}
if sp, ok := ctx.Value(ctxOtelSpanKey{}).(trace.Span); ok && sp != nil {
if err != nil {
sp.RecordError(err)
}
sp.SetStatus(codes.Error, msg)
sp.End()
}
if h.params.Logger != nil {
h.params.Logger.Warn("eino_callback_error",
zap.String("runId", h.runID),
zap.String("spanId", spanID),
zap.String("component", string(info.Component)),
zap.String("name", info.Name),
zap.String("type", info.Type),
zap.Error(err),
)
}
if h.params.Progress != nil && h.cfg.ShouldEmitEinoTraceSSE(h.mode) {
h.params.Progress("eino_trace_error", msg, map[string]interface{}{
"runId": h.runID,
"spanId": spanID,
"conversationId": strings.TrimSpace(h.params.ConversationID),
"orchestration": strings.TrimSpace(h.params.OrchMode),
"component": string(info.Component),
"name": info.Name,
"type": info.Type,
"ts": time.Now().UTC().Format(time.RFC3339Nano),
"error": msg,
"source": "eino_callbacks",
})
}
return ctx
}
func (h *runHandler) onStartStreamIn(ctx context.Context, info *callbacks.RunInfo, input *schema.StreamReader[callbacks.CallbackInput]) context.Context {
if input != nil {
input.Close()
}
if h.params.Logger != nil {
h.params.Logger.Debug("eino_callback_stream_in",
zap.String("runId", h.runID),
zap.String("component", string(info.Component)),
zap.String("name", info.Name),
)
}
return ctx
}
func (h *runHandler) onEndStreamOut(ctx context.Context, info *callbacks.RunInfo, output *schema.StreamReader[callbacks.CallbackOutput]) context.Context {
if output != nil {
output.Close()
}
if h.params.Logger != nil {
h.params.Logger.Debug("eino_callback_stream_out",
zap.String("runId", h.runID),
zap.String("component", string(info.Component)),
zap.String("name", info.Name),
)
}
return ctx
}
func callbackSpanName(info *callbacks.RunInfo) string {
if info == nil {
return "eino.callback"
}
comp := strings.TrimSpace(string(info.Component))
name := strings.TrimSpace(info.Name)
typ := strings.TrimSpace(info.Type)
if name != "" && comp != "" {
return comp + "/" + name
}
if typ != "" && comp != "" {
return comp + "[" + typ + "]"
}
if comp != "" {
return comp
}
return "eino.callback"
}
func truncateForAttr(s string, maxRunes int) string {
return truncateRunes(s, maxRunes)
}
func summarizeCallbackInput(in callbacks.CallbackInput, maxRunes int) string {
if in == nil {
return ""
}
if ai := adk.ConvAgentCallbackInput(in); ai != nil {
parts := []string{"agent"}
if ai.Input != nil {
parts = append(parts, fmt.Sprintf("messages=%d", len(ai.Input.Messages)))
}
if ai.ResumeInfo != nil {
parts = append(parts, "resume=true")
}
return strings.Join(parts, " ")
}
if mi := model.ConvCallbackInput(in); mi != nil {
return fmt.Sprintf("chatModel messages=%d tools=%d", len(mi.Messages), len(mi.Tools))
}
if ti := tool.ConvCallbackInput(in); ti != nil {
raw := ti.ArgumentsInJSON
return "tool args=" + truncateRunes(raw, maxRunes)
}
b, err := json.Marshal(in)
if err != nil {
return fmt.Sprintf("%T", in)
}
return truncateRunes(string(b), maxRunes)
}
func summarizeCallbackOutput(out callbacks.CallbackOutput, maxRunes int) string {
if out == nil {
return ""
}
if ao := adk.ConvAgentCallbackOutput(out); ao != nil {
return "agent_events=stream"
}
if mo := model.ConvCallbackOutput(out); mo != nil && mo.Message != nil {
s := ""
if mo.Message.Content != "" {
s = mo.Message.Content
}
if mo.TokenUsage != nil {
return fmt.Sprintf("tokens total=%d completion=%d prompt=%d text=%s",
mo.TokenUsage.TotalTokens, mo.TokenUsage.CompletionTokens, mo.TokenUsage.PromptTokens,
truncateRunes(s, minInt(120, maxRunes)))
}
return "assistant len=" + itoa(len(s))
}
if to := tool.ConvCallbackOutput(out); to != nil {
if to.Response != "" {
return truncateRunes(to.Response, maxRunes)
}
if to.ToolOutput != nil {
return "tool_result multimodal"
}
}
b, err := json.Marshal(out)
if err != nil {
return fmt.Sprintf("%T", out)
}
return truncateRunes(string(b), maxRunes)
}
func minInt(a, b int) int {
if a < b {
return a
}
return b
}
func itoa(n int) string {
return fmt.Sprintf("%d", n)
}
func truncateRunes(s string, maxRunes int) string {
if maxRunes <= 0 {
return ""
}
r := []rune(s)
if len(r) <= maxRunes {
return s
}
return string(r[:maxRunes]) + "…"
}
+26
@@ -0,0 +1,26 @@
package einoobserve
import (
"context"
"testing"
"cyberstrike-ai/internal/config"
)
func TestAttachAgentRunCallbacks_Disabled(t *testing.T) {
ctx := context.Background()
cfg := &config.MultiAgentEinoCallbacksConfig{Enabled: false}
out := AttachAgentRunCallbacks(ctx, cfg, Params{})
if out != ctx {
t.Fatalf("expected same ctx when disabled")
}
}
func TestTruncateRunes(t *testing.T) {
if got := truncateRunes("abc", 10); got != "abc" {
t.Fatalf("got %q", got)
}
if got := truncateRunes("abcdefghij", 4); got != "abcd…" {
t.Fatalf("got %q", got)
}
}
+111
@@ -0,0 +1,111 @@
package einoobserve
import (
"context"
"fmt"
"strings"
"sync"
"cyberstrike-ai/internal/config"
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
"go.opentelemetry.io/otel/sdk/resource"
sdktrace "go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.26.0"
"go.uber.org/zap"
)
var (
otelMu sync.Mutex
otelShutdown func(context.Context) error
otelInitialized bool
)
// InitOtelFromConfig installs the global OpenTelemetry TracerProvider when
// eino_callbacks.otel is enabled and exporter is not none. Safe to call multiple times.
func InitOtelFromConfig(cfg *config.MultiAgentEinoCallbacksConfig, log *zap.Logger) (shutdown func(context.Context) error, err error) {
shutdown = func(context.Context) error { return nil }
if cfg == nil || !cfg.OtelTracingActive() {
return shutdown, nil
}
otelMu.Lock()
defer otelMu.Unlock()
if otelInitialized {
if otelShutdown != nil {
return otelShutdown, nil
}
return shutdown, nil
}
oc := cfg.Otel
expKind := oc.OtelExporterEffective()
ctx := context.Background()
var exporter sdktrace.SpanExporter
switch expKind {
case "stdout":
exporter, err = stdouttrace.New()
if err != nil {
return shutdown, fmt.Errorf("eino otel stdout exporter: %w", err)
}
case "otlphttp":
ep := strings.TrimSpace(oc.OTLPEndpoint)
if ep == "" {
ep = "localhost:4318"
}
exporter, err = otlptracehttp.New(ctx,
otlptracehttp.WithEndpoint(ep),
otlptracehttp.WithURLPath("/v1/traces"),
)
if err != nil {
return shutdown, fmt.Errorf("eino otel otlphttp exporter: %w", err)
}
default:
return shutdown, nil
}
res, err := resource.New(ctx,
resource.WithAttributes(
semconv.ServiceName(oc.ServiceNameEffective()),
),
)
if err != nil {
return shutdown, fmt.Errorf("eino otel resource: %w", err)
}
sampler := sdktrace.ParentBased(sdktrace.TraceIDRatioBased(oc.SampleRatioEffective()))
tp := sdktrace.NewTracerProvider(
sdktrace.WithBatcher(exporter),
sdktrace.WithResource(res),
sdktrace.WithSampler(sampler),
)
otel.SetTracerProvider(tp)
otelShutdown = tp.Shutdown
otelInitialized = true
if log != nil {
log.Info("eino otel: tracer provider initialized",
zap.String("exporter", expKind),
zap.String("service", oc.ServiceNameEffective()),
zap.Float64("sample_ratio", oc.SampleRatioEffective()),
)
}
return otelShutdown, nil
}
// ShutdownOtel flushes and shuts down the global TracerProvider if it was installed.
func ShutdownOtel(ctx context.Context) error {
otelMu.Lock()
fn := otelShutdown
otelShutdown = nil
inited := otelInitialized
otelInitialized = false
otelMu.Unlock()
if !inited || fn == nil {
return nil
}
return fn(ctx)
}
+85 -81
@@ -19,6 +19,7 @@ import (
"cyberstrike-ai/internal/agent"
"cyberstrike-ai/internal/config"
"cyberstrike-ai/internal/database"
"cyberstrike-ai/internal/reasoning"
"cyberstrike-ai/internal/mcp"
"cyberstrike-ai/internal/mcp/builtin"
"cyberstrike-ai/internal/multiagent"
@@ -201,6 +202,14 @@ type ChatAttachment struct {
ServerPath string `json:"serverPath,omitempty"` // Absolute path already saved under chat_uploads (returned by POST /api/chat-uploads)
}
// ChatReasoningRequest carries the chat page's "model reasoning" intent (consumed only by the Eino path; the native agent loop ignores it).
type ChatReasoningRequest struct {
// Mode: default (follow the system setting) | off | on | auto
Mode string `json:"mode,omitempty"`
// Effort: low | medium | high | max; empty means unspecified (decided by the system default and per-profile settings).
Effort string `json:"effort,omitempty"`
}
// ChatRequest is a chat request.
type ChatRequest struct {
Message string `json:"message" binding:"required"`
@@ -209,10 +218,18 @@ type ChatRequest struct {
Attachments []ChatAttachment `json:"attachments,omitempty"`
WebShellConnectionID string `json:"webshellConnectionId,omitempty"` // WebShell management - AI assistant: the currently selected connection ID; only webshell_* tools are used
Hitl *HITLRequest `json:"hitl,omitempty"`
Reasoning *ChatReasoningRequest `json:"reasoning,omitempty"`
// Orchestration applies only to /api/multi-agent and /api/multi-agent/stream: deep | plan_execute | supervisor; empty is equivalent to deep. Robot/batch calls without a request body default to deep server-side. /api/eino-agent* does not use this field.
Orchestration string `json:"orchestration,omitempty"`
}
func chatReasoningToClientIntent(r *ChatReasoningRequest) *reasoning.ClientIntent {
if r == nil {
return nil
}
return &reasoning.ClientIntent{Mode: r.Mode, Effort: r.Effort}
}
type HITLRequest struct {
Enabled bool `json:"enabled"`
Mode string `json:"mode,omitempty"`
@@ -567,14 +584,7 @@ func (h *AgentHandler) AgentLoop(c *gin.Context) {
h.logger.Warn("获取历史消息失败", zap.Error(err))
agentHistoryMessages = []agent.ChatMessage{}
} else {
// Convert database messages to the Agent message format
agentHistoryMessages = make([]agent.ChatMessage, 0, len(historyMessages))
for _, msg := range historyMessages {
agentHistoryMessages = append(agentHistoryMessages, agent.ChatMessage{
Role: msg.Role,
Content: msg.Content,
})
}
agentHistoryMessages = dbMessagesToAgentChatMessages(historyMessages)
h.logger.Info("从消息表加载历史消息", zap.Int("count", len(agentHistoryMessages)))
}
} else {
@@ -775,6 +785,7 @@ func (h *AgentHandler) ProcessMessageForRobot(ctx context.Context, conversationI
progressCallback,
h.agentsMarkdownDir,
"deep",
nil,
)
if errMA != nil {
if shouldPersistEinoAgentTraceAfterRunError(ctx) {
@@ -788,17 +799,8 @@ func (h *AgentHandler) ProcessMessageForRobot(ctx context.Context, conversationI
return "", conversationID, errMA
}
if assistantMessageID != "" {
mcpIDsJSON := ""
if len(resultMA.MCPExecutionIDs) > 0 {
jsonData, _ := json.Marshal(resultMA.MCPExecutionIDs)
mcpIDsJSON = string(jsonData)
}
_, err = h.db.Exec(
"UPDATE messages SET content = ?, mcp_execution_ids = ?, updated_at = ? WHERE id = ?",
resultMA.Response, mcpIDsJSON, time.Now(), assistantMessageID,
)
if err != nil {
h.logger.Warn("机器人:更新助手消息失败", zap.Error(err))
if errU := h.db.UpdateAssistantMessageFinalize(assistantMessageID, resultMA.Response, resultMA.MCPExecutionIDs, multiagent.AggregatedReasoningFromTraceJSON(resultMA.LastAgentTraceInput)); errU != nil {
h.logger.Warn("机器人:更新助手消息失败", zap.Error(errU))
}
} else {
if _, err = h.db.AddMessage(conversationID, "assistant", resultMA.Response, resultMA.MCPExecutionIDs); err != nil {
@@ -823,17 +825,8 @@ func (h *AgentHandler) ProcessMessageForRobot(ctx context.Context, conversationI
// Update assistant message content and MCP execution IDs (consistent with the stream path)
if assistantMessageID != "" {
mcpIDsJSON := ""
if len(result.MCPExecutionIDs) > 0 {
jsonData, _ := json.Marshal(result.MCPExecutionIDs)
mcpIDsJSON = string(jsonData)
}
_, err = h.db.Exec(
"UPDATE messages SET content = ?, mcp_execution_ids = ?, updated_at = ? WHERE id = ?",
result.Response, mcpIDsJSON, time.Now(), assistantMessageID,
)
if err != nil {
h.logger.Warn("机器人:更新助手消息失败", zap.Error(err))
if errU := h.db.UpdateAssistantMessageFinalize(assistantMessageID, result.Response, result.MCPExecutionIDs, multiagent.AggregatedReasoningFromTraceJSON(result.LastAgentTraceInput)); errU != nil {
h.logger.Warn("机器人:更新助手消息失败", zap.Error(errU))
}
} else {
if _, err = h.db.AddMessage(conversationID, "assistant", result.Response, result.MCPExecutionIDs); err != nil {
@@ -891,10 +884,12 @@ func (h *AgentHandler) createProgressCallback(runCtx context.Context, cancelRun
return ""
}
// thinking_stream_*: not persisted per chunk; aggregated by streamId, with one persistable thinking record added before the next key event
// thinking_stream_* (ReAct-style assistant body streams) and reasoning_chain_stream_* (Eino ReasoningContent):
// not persisted per chunk; aggregated by streamId and flushed as separate thinking / reasoning_chain records.
type thinkingBuf struct {
b strings.Builder
meta map[string]interface{}
b strings.Builder
meta map[string]interface{}
persistAs string // "thinking" | "reasoning_chain"
}
thinkingStreams := make(map[string]*thinkingBuf) // streamId -> buf
flushedThinking := make(map[string]bool) // streamId -> flushed
@@ -948,8 +943,12 @@ func (h *AgentHandler) createProgressCallback(runCtx context.Context, cancelRun
}
data[k] = v
}
if err := h.db.AddProcessDetail(assistantMessageID, conversationID, "thinking", content, data); err != nil {
h.logger.Warn("保存过程详情失败", zap.Error(err), zap.String("eventType", "thinking"))
persist := tb.persistAs
if persist != "reasoning_chain" {
persist = "thinking"
}
if err := h.db.AddProcessDetail(assistantMessageID, conversationID, persist, content, data); err != nil {
h.logger.Warn("保存过程详情失败", zap.Error(err), zap.String("eventType", persist))
}
flushedThinking[sid] = true
}
@@ -1177,14 +1176,20 @@ func (h *AgentHandler) createProgressCallback(runCtx context.Context, cancelRun
return
}
// Aggregate thinking_stream_* (ReasoningContent); not persisted per chunk
if eventType == "thinking_stream_start" {
// Aggregate thinking_stream_* / reasoning_chain_stream_*; not persisted per chunk
if eventType == "thinking_stream_start" || eventType == "reasoning_chain_stream_start" {
persistAs := "thinking"
if eventType == "reasoning_chain_stream_start" {
persistAs = "reasoning_chain"
}
if dataMap, ok := data.(map[string]interface{}); ok {
if sid, ok2 := dataMap["streamId"].(string); ok2 && sid != "" {
tb := thinkingStreams[sid]
if tb == nil {
tb = &thinkingBuf{meta: map[string]interface{}{}}
tb = &thinkingBuf{meta: map[string]interface{}{}, persistAs: persistAs}
thinkingStreams[sid] = tb
} else {
tb.persistAs = persistAs
}
// Record metadata (source/einoAgent/einoRole/iteration, etc.)
for k, v := range dataMap {
@@ -1194,15 +1199,21 @@ func (h *AgentHandler) createProgressCallback(runCtx context.Context, cancelRun
}
return
}
if eventType == "thinking_stream_delta" {
if eventType == "thinking_stream_delta" || eventType == "reasoning_chain_stream_delta" {
persistAs := "thinking"
if eventType == "reasoning_chain_stream_delta" {
persistAs = "reasoning_chain"
}
if dataMap, ok := data.(map[string]interface{}); ok {
if sid, ok2 := dataMap["streamId"].(string); ok2 && sid != "" {
tb := thinkingStreams[sid]
if tb == nil {
tb = &thinkingBuf{meta: map[string]interface{}{}}
tb = &thinkingBuf{meta: map[string]interface{}{}, persistAs: persistAs}
thinkingStreams[sid] = tb
} else if tb.persistAs == "" {
tb.persistAs = persistAs
}
// Append delta chunks directly (the message itself is the reasoning content)
// Append delta chunks directly
tb.b.WriteString(message)
// Sometimes a delta arrives before its start event; backfill the metadata
for k, v := range dataMap {
@@ -1213,10 +1224,9 @@ func (h *AgentHandler) createProgressCallback(runCtx context.Context, cancelRun
return
}
// When the Agent sends both thinking_stream_* and thinking with the same streamId,
// thinking_stream_* is already aggregated and persisted by flushThinkingStreams();
// skip thinking entries with that streamId here to avoid duplicate processDetails entries.
if eventType == "thinking" {
// When the Agent sends *_stream_* events together with thinking/reasoning_chain entries sharing a streamId,
// the streaming aggregation is already persisted by flushThinkingStreams(); skip the per-entry duplicates here.
if eventType == "thinking" || eventType == "reasoning_chain" {
if dataMap, ok := data.(map[string]interface{}); ok {
if sid, ok2 := dataMap["streamId"].(string); ok2 && sid != "" {
if tb, exists := thinkingStreams[sid]; exists && tb != nil {
@@ -1239,13 +1249,17 @@ func (h *AgentHandler) createProgressCallback(runCtx context.Context, cancelRun
eventType != "response_start" &&
eventType != "response_delta" &&
eventType != "tool_result_delta" &&
eventType != "eino_trace_run" &&
eventType != "eino_trace_start" &&
eventType != "eino_trace_end" &&
eventType != "eino_trace_error" &&
eventType != "eino_agent_reply_stream_start" &&
eventType != "eino_agent_reply_stream_delta" &&
eventType != "eino_agent_reply_stream_end" {
if eventType == "tool_result" {
discardPlanningIfEchoesToolResult(&respPlan, data)
}
// Before persisting a key process event, first flush the pending "planning" record and the thinking_stream buffers
// Before persisting a key process event, first flush the pending "planning" record and the aggregated thinking / reasoning_chain streams
flushResponsePlan()
flushThinkingStreams()
if err := h.db.AddProcessDetail(assistantMessageID, conversationID, eventType, message, data); err != nil {
@@ -1427,14 +1441,7 @@ func (h *AgentHandler) AgentLoopStream(c *gin.Context) {
h.logger.Warn("获取历史消息失败", zap.Error(err))
agentHistoryMessages = []agent.ChatMessage{}
} else {
// Convert database messages to the Agent message format
agentHistoryMessages = make([]agent.ChatMessage, 0, len(historyMessages))
for _, msg := range historyMessages {
agentHistoryMessages = append(agentHistoryMessages, agent.ChatMessage{
Role: msg.Role,
Content: msg.Content,
})
}
agentHistoryMessages = dbMessagesToAgentChatMessages(historyMessages)
h.logger.Info("从消息表加载历史消息", zap.Int("count", len(agentHistoryMessages)))
}
} else {
@@ -1727,20 +1734,8 @@ func (h *AgentHandler) AgentLoopStream(c *gin.Context) {
// Update assistant message content
if assistantMsg != nil {
_, err = h.db.Exec(
"UPDATE messages SET content = ?, mcp_execution_ids = ?, updated_at = ? WHERE id = ?",
result.Response,
func() string {
if len(result.MCPExecutionIDs) > 0 {
jsonData, _ := json.Marshal(result.MCPExecutionIDs)
return string(jsonData)
}
return ""
}(),
time.Now(), assistantMessageID,
)
if err != nil {
h.logger.Error("更新助手消息失败", zap.Error(err))
if errU := h.db.UpdateAssistantMessageFinalize(assistantMessageID, result.Response, result.MCPExecutionIDs, multiagent.AggregatedReasoningFromTraceJSON(result.LastAgentTraceInput)); errU != nil {
h.logger.Error("更新助手消息失败", zap.Error(errU))
}
} else {
// If the earlier creation failed, create it now
@@ -2664,12 +2659,12 @@ func (h *AgentHandler) executeBatchQueue(queueID string) {
var runErr error
switch {
case useBatchMulti:
resultMA, runErr = multiagent.RunDeepAgent(taskCtx, h.config, &h.config.MultiAgent, h.agent, h.logger, conversationID, finalMessage, []agent.ChatMessage{}, roleTools, progressCallback, h.agentsMarkdownDir, batchOrch)
resultMA, runErr = multiagent.RunDeepAgent(taskCtx, h.config, &h.config.MultiAgent, h.agent, h.logger, conversationID, finalMessage, []agent.ChatMessage{}, roleTools, progressCallback, h.agentsMarkdownDir, batchOrch, nil)
case useEinoSingle:
if h.config == nil {
runErr = fmt.Errorf("服务器配置未加载")
} else {
resultMA, runErr = multiagent.RunEinoSingleChatModelAgent(taskCtx, h.config, &h.config.MultiAgent, h.agent, h.logger, conversationID, finalMessage, []agent.ChatMessage{}, roleTools, progressCallback)
resultMA, runErr = multiagent.RunEinoSingleChatModelAgent(taskCtx, h.config, &h.config.MultiAgent, h.agent, h.logger, conversationID, finalMessage, []agent.ChatMessage{}, roleTools, progressCallback, nil)
}
default:
result, runErr = h.agent.AgentLoopWithProgress(taskCtx, finalMessage, []agent.ChatMessage{}, conversationID, progressCallback, roleTools)
@@ -2768,17 +2763,7 @@ func (h *AgentHandler) executeBatchQueue(queueID string) {
// Update assistant message content
if assistantMessageID != "" {
mcpIDsJSON := ""
if len(mcpIDs) > 0 {
jsonData, _ := json.Marshal(mcpIDs)
mcpIDsJSON = string(jsonData)
}
if _, updateErr := h.db.Exec(
"UPDATE messages SET content = ?, mcp_execution_ids = ?, updated_at = ? WHERE id = ?",
resText,
mcpIDsJSON,
time.Now(), assistantMessageID,
); updateErr != nil {
if updateErr := h.db.UpdateAssistantMessageFinalize(assistantMessageID, resText, mcpIDs, multiagent.AggregatedReasoningFromTraceJSON(lastIn)); updateErr != nil {
h.logger.Warn("更新助手消息失败", zap.String("queueId", queueID), zap.String("taskId", task.ID), zap.Error(updateErr))
// If the update failed, try creating a new message
_, err = h.db.AddMessage(conversationID, "assistant", resText, mcpIDs)
@@ -2870,6 +2855,10 @@ func (h *AgentHandler) loadHistoryFromAgentTrace(conversationID string) ([]agent
if content, ok := msgMap["content"].(string); ok {
msg.Content = content
}
// DeepSeek thinking mode: assistant messages containing tool calls must echo reasoning_content back in subsequent requests
if rc, ok := msgMap["reasoning_content"].(string); ok && strings.TrimSpace(rc) != "" {
msg.ReasoningContent = rc
}
// Parse tool_calls (if present)
if toolCallsRaw, ok := msgMap["tool_calls"]; ok && toolCallsRaw != nil {
@@ -2975,3 +2964,18 @@ func (h *AgentHandler) loadHistoryFromAgentTrace(conversationID string) ([]agent
)
return agentMessages, nil
}
// dbMessagesToAgentChatMessages maps DB rows to agent ChatMessage for history fallback
// (includes reasoning_content for DeepSeek thinking + tool replay).
func dbMessagesToAgentChatMessages(msgs []database.Message) []agent.ChatMessage {
out := make([]agent.ChatMessage, 0, len(msgs))
for i := range msgs {
m := msgs[i]
out = append(out, agent.ChatMessage{
Role: m.Role,
Content: m.Content,
ReasoningContent: m.ReasoningContent,
})
}
return out
}
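The thinkingBuf aggregation in createProgressCallback above (append *_stream_delta chunks per streamId, then persist each stream once as a thinking or reasoning_chain record on flush) can be sketched dependency-free; streamAgg and its method names are illustrative, not the project's:

```go
package main

import "strings"

// streamAgg mirrors the handler's thinkingBuf map: deltas for a streamId
// are appended in memory and persisted once, as a single "thinking" or
// "reasoning_chain" record, when the stream is flushed.
type streamAgg struct {
	bufs    map[string]*strings.Builder
	kind    map[string]string // streamId -> "thinking" | "reasoning_chain"
	flushed map[string]bool
}

func newStreamAgg() *streamAgg {
	return &streamAgg{
		bufs:    map[string]*strings.Builder{},
		kind:    map[string]string{},
		flushed: map[string]bool{},
	}
}

// delta appends a chunk, creating the buffer lazily so a delta that
// arrives before its start event is still captured.
func (a *streamAgg) delta(streamID, persistAs, chunk string) {
	b := a.bufs[streamID]
	if b == nil {
		b = &strings.Builder{}
		a.bufs[streamID] = b
	}
	if a.kind[streamID] == "" {
		a.kind[streamID] = persistAs
	}
	b.WriteString(chunk)
}

// flush returns the records to persist, keyed "kind:streamId", once per stream.
func (a *streamAgg) flush() map[string]string {
	out := map[string]string{}
	for id, b := range a.bufs {
		if a.flushed[id] || b.Len() == 0 {
			continue
		}
		out[a.kind[id]+":"+id] = b.String()
		a.flushed[id] = true
	}
	return out
}
```

Flushing once per streamId is what lets the handler skip same-streamId thinking/reasoning_chain events without losing content or double-persisting it.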
+82 -15
@@ -609,15 +609,46 @@ func (h *ConfigHandler) GetTools(c *gin.Context) {
// UpdateConfigRequest is a config update request
type UpdateConfigRequest struct {
OpenAI *config.OpenAIConfig `json:"openai,omitempty"`
FOFA *config.FofaConfig `json:"fofa,omitempty"`
MCP *config.MCPConfig `json:"mcp,omitempty"`
Tools []ToolEnableStatus `json:"tools,omitempty"`
Agent *config.AgentConfig `json:"agent,omitempty"`
Knowledge *config.KnowledgeConfig `json:"knowledge,omitempty"`
Robots *config.RobotsConfig `json:"robots,omitempty"`
MultiAgent *config.MultiAgentAPIUpdate `json:"multi_agent,omitempty"`
C2 *config.C2APIUpdate `json:"c2,omitempty"`
OpenAI *config.OpenAIConfig `json:"openai,omitempty"`
FOFA *config.FofaConfig `json:"fofa,omitempty"`
MCP *config.MCPConfig `json:"mcp,omitempty"`
Tools []ToolEnableStatus `json:"tools,omitempty"`
Agent *AgentConfigUpdate `json:"agent,omitempty"`
Knowledge *config.KnowledgeConfig `json:"knowledge,omitempty"`
Robots *config.RobotsConfig `json:"robots,omitempty"`
MultiAgent *config.MultiAgentAPIUpdate `json:"multi_agent,omitempty"`
C2 *config.C2APIUpdate `json:"c2,omitempty"`
}
// AgentConfigUpdate backs the agent section of PATCH /api/config: only fields present in the JSON (non-nil pointers) overwrite the in-memory config.
// This avoids the old whole-struct *AgentConfig replacement, where omitted integer fields deserialized to 0 and clobbered live values (e.g. tool_timeout_minutes becoming 0).
type AgentConfigUpdate struct {
MaxIterations *int `json:"max_iterations,omitempty"`
LargeResultThreshold *int `json:"large_result_threshold,omitempty"`
ResultStorageDir *string `json:"result_storage_dir,omitempty"`
ToolTimeoutMinutes *int `json:"tool_timeout_minutes,omitempty"`
SystemPromptPath *string `json:"system_prompt_path,omitempty"`
}
func applyAgentConfigUpdate(dst *config.AgentConfig, src *AgentConfigUpdate) {
if dst == nil || src == nil {
return
}
if src.MaxIterations != nil {
dst.MaxIterations = *src.MaxIterations
}
if src.LargeResultThreshold != nil {
dst.LargeResultThreshold = *src.LargeResultThreshold
}
if src.ResultStorageDir != nil {
dst.ResultStorageDir = *src.ResultStorageDir
}
if src.ToolTimeoutMinutes != nil {
dst.ToolTimeoutMinutes = *src.ToolTimeoutMinutes
}
if src.SystemPromptPath != nil {
dst.SystemPromptPath = *src.SystemPromptPath
}
}
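The field-by-field merge above can be exercised end to end with a dependency-free sketch; agentConfig and agentConfigPatch mirror the real structs but are illustrative stand-ins:

```go
package main

import "encoding/json"

// agentConfig stands in for the live in-memory config; agentConfigPatch
// stands in for the pointer-field PATCH body, where only keys present in
// the JSON become non-nil.
type agentConfig struct {
	MaxIterations      int
	ToolTimeoutMinutes int
	SystemPromptPath   string
}

type agentConfigPatch struct {
	MaxIterations      *int    `json:"max_iterations,omitempty"`
	ToolTimeoutMinutes *int    `json:"tool_timeout_minutes,omitempty"`
	SystemPromptPath   *string `json:"system_prompt_path,omitempty"`
}

// applyPatch copies only the fields the client actually sent, so an
// omitted tool_timeout_minutes keeps its current value instead of being
// zeroed by deserialization.
func applyPatch(dst *agentConfig, src *agentConfigPatch) {
	if dst == nil || src == nil {
		return
	}
	if src.MaxIterations != nil {
		dst.MaxIterations = *src.MaxIterations
	}
	if src.ToolTimeoutMinutes != nil {
		dst.ToolTimeoutMinutes = *src.ToolTimeoutMinutes
	}
	if src.SystemPromptPath != nil {
		dst.SystemPromptPath = *src.SystemPromptPath
	}
}

// patchFromJSON decodes a partial JSON body and merges it into dst.
func patchFromJSON(dst *agentConfig, raw []byte) error {
	var p agentConfigPatch
	if err := json.Unmarshal(raw, &p); err != nil {
		return err
	}
	applyPatch(dst, &p)
	return nil
}
```

The pointer indirection is the whole trick: encoding/json leaves absent keys as nil pointers, which is distinguishable from an explicit zero, unlike plain int fields.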
// ToolEnableStatus reports a tool's enabled status
@@ -664,12 +695,19 @@ func (h *ConfigHandler) UpdateConfig(c *gin.Context) {
)
}
// Update Agent config
// Update Agent config (merge field by field so partial JSON does not zero omitted fields)
if req.Agent != nil {
h.config.Agent = *req.Agent
applyAgentConfigUpdate(&h.config.Agent, req.Agent)
h.logger.Info("更新Agent配置",
zap.Int("max_iterations", h.config.Agent.MaxIterations),
zap.Int("tool_timeout_minutes", h.config.Agent.ToolTimeoutMinutes),
)
if h.agent != nil && req.Agent.MaxIterations != nil {
h.agent.UpdateMaxIterations(h.config.Agent.MaxIterations)
}
if h.mcpServer != nil {
h.mcpServer.ConfigureHTTPToolCallTimeoutFromAgentMinutes(h.config.Agent.ToolTimeoutMinutes)
}
}
// Update Knowledge config
@@ -717,7 +755,9 @@ func (h *ConfigHandler) UpdateConfig(c *gin.Context) {
if req.MultiAgent.PlanExecuteLoopMaxIterations != nil {
h.config.MultiAgent.PlanExecuteLoopMaxIterations = *req.MultiAgent.PlanExecuteLoopMaxIterations
}
h.config.MultiAgent.EinoMiddleware.ToolSearchAlwaysVisibleTools = dedupeToolNameList(req.MultiAgent.ToolSearchAlwaysVisibleTools)
if req.MultiAgent.ToolSearchAlwaysVisibleTools != nil {
h.config.MultiAgent.EinoMiddleware.ToolSearchAlwaysVisibleTools = dedupeToolNameList(*req.MultiAgent.ToolSearchAlwaysVisibleTools)
}
h.logger.Info("更新多代理配置",
zap.Bool("enabled", h.config.MultiAgent.Enabled),
zap.Bool("robot_use_multi_agent", h.config.MultiAgent.RobotUseMultiAgent),
@@ -1116,6 +1156,9 @@ func (h *ConfigHandler) ApplyConfig(c *gin.Context) {
h.agent.UpdateToolDescriptionMode(h.config.Security.ToolDescriptionMode)
h.logger.Info("Agent配置已更新")
}
if h.mcpServer != nil {
h.mcpServer.ConfigureHTTPToolCallTimeoutFromAgentMinutes(h.config.Agent.ToolTimeoutMinutes)
}
// Update AttackChainHandler's OpenAI config
if h.attackChainHandler != nil {
@@ -1181,7 +1224,7 @@ func (h *ConfigHandler) saveConfig() error {
return fmt.Errorf("解析配置文件失败: %w", err)
}
updateAgentConfig(root, h.config.Agent.MaxIterations)
updateAgentConfig(root, h.config.Agent)
updateMCPConfig(root, h.config.MCP)
updateOpenAIConfig(root, h.config.OpenAI)
updateFOFAConfig(root, h.config.FOFA)
@@ -1286,10 +1329,14 @@ func writeYAMLDocument(path string, doc *yaml.Node) error {
return os.WriteFile(path, buf.Bytes(), 0644)
}
func updateAgentConfig(doc *yaml.Node, maxIterations int) {
func updateAgentConfig(doc *yaml.Node, agent config.AgentConfig) {
root := doc.Content[0]
agentNode := ensureMap(root, "agent")
setIntInMap(agentNode, "max_iterations", maxIterations)
setIntInMap(agentNode, "max_iterations", agent.MaxIterations)
setIntInMap(agentNode, "tool_timeout_minutes", agent.ToolTimeoutMinutes)
setIntInMap(agentNode, "large_result_threshold", agent.LargeResultThreshold)
setStringInMap(agentNode, "result_storage_dir", agent.ResultStorageDir)
setStringInMap(agentNode, "system_prompt_path", agent.SystemPromptPath)
}
func updateMCPConfig(doc *yaml.Node, cfg config.MCPConfig) {
@@ -1312,6 +1359,19 @@ func updateOpenAIConfig(doc *yaml.Node, cfg config.OpenAIConfig) {
if cfg.MaxTotalTokens > 0 {
setIntInMap(openaiNode, "max_total_tokens", cfg.MaxTotalTokens)
}
rn := ensureMap(openaiNode, "reasoning")
if strings.TrimSpace(cfg.Reasoning.Mode) != "" {
setStringInMap(rn, "mode", cfg.Reasoning.Mode)
}
if strings.TrimSpace(cfg.Reasoning.Effort) != "" {
setStringInMap(rn, "effort", cfg.Reasoning.Effort)
}
if cfg.Reasoning.AllowClientReasoning != nil {
setBoolInMap(rn, "allow_client_reasoning", *cfg.Reasoning.AllowClientReasoning)
}
if strings.TrimSpace(cfg.Reasoning.Profile) != "" {
setStringInMap(rn, "profile", cfg.Reasoning.Profile)
}
}
func updateFOFAConfig(doc *yaml.Node, cfg config.FofaConfig) {
@@ -1416,6 +1476,11 @@ func updateRobotsConfig(doc *yaml.Node, cfg config.RobotsConfig) {
root := doc.Content[0]
robotsNode := ensureMap(root, "robots")
if cfg.Session.StrictUserIdentity != nil {
sessionNode := ensureMap(robotsNode, "session")
setBoolInMap(sessionNode, "strict_user_identity", *cfg.Session.StrictUserIdentity)
}
wecomNode := ensureMap(robotsNode, "wecom")
setBoolInMap(wecomNode, "enabled", cfg.Wecom.Enabled)
setStringInMap(wecomNode, "token", cfg.Wecom.Token)
@@ -1428,12 +1493,14 @@ func updateRobotsConfig(doc *yaml.Node, cfg config.RobotsConfig) {
setBoolInMap(dingtalkNode, "enabled", cfg.Dingtalk.Enabled)
setStringInMap(dingtalkNode, "client_id", cfg.Dingtalk.ClientID)
setStringInMap(dingtalkNode, "client_secret", cfg.Dingtalk.ClientSecret)
setBoolInMap(dingtalkNode, "allow_conversation_id_fallback", cfg.Dingtalk.AllowConversationIDFallback)
larkNode := ensureMap(robotsNode, "lark")
setBoolInMap(larkNode, "enabled", cfg.Lark.Enabled)
setStringInMap(larkNode, "app_id", cfg.Lark.AppID)
setStringInMap(larkNode, "app_secret", cfg.Lark.AppSecret)
setStringInMap(larkNode, "verify_token", cfg.Lark.VerifyToken)
setBoolInMap(larkNode, "allow_chat_id_fallback", cfg.Lark.AllowChatIDFallback)
}
func updateMultiAgentConfig(doc *yaml.Node, cfg config.MultiAgentConfig) {
@@ -117,6 +117,8 @@ func (h *ConversationHandler) GetMessageProcessDetails(c *gin.Context) {
return
}
details = database.DedupeConsecutiveProcessDetails(details)
	// Convert to the JSON structure the frontend expects (consistent with the processDetails structure in GetConversation).
out := make([]map[string]interface{}, 0, len(details))
for _, d := range details {
@@ -196,6 +196,7 @@ func (h *AgentHandler) EinoSingleAgentLoopStream(c *gin.Context) {
curHistory,
roleTools,
progressCallback,
chatReasoningToClientIntent(req.Reasoning),
)
timeoutCancel()
@@ -297,18 +298,7 @@ func (h *AgentHandler) EinoSingleAgentLoopStream(c *gin.Context) {
}
if assistantMessageID != "" {
mcpIDsJSON := ""
if len(cumulativeMCPExecutionIDs) > 0 {
jsonData, _ := json.Marshal(cumulativeMCPExecutionIDs)
mcpIDsJSON = string(jsonData)
}
_, _ = h.db.Exec(
"UPDATE messages SET content = ?, mcp_execution_ids = ?, updated_at = ? WHERE id = ?",
result.Response,
mcpIDsJSON,
time.Now(),
assistantMessageID,
)
_ = h.db.UpdateAssistantMessageFinalize(assistantMessageID, result.Response, cumulativeMCPExecutionIDs, multiagent.AggregatedReasoningFromTraceJSON(result.LastAgentTraceInput))
}
if result.LastAgentTraceInput != "" || result.LastAgentTraceOutput != "" {
@@ -376,6 +366,7 @@ func (h *AgentHandler) EinoSingleAgentLoop(c *gin.Context) {
prep.History,
prep.RoleTools,
progressCallback,
chatReasoningToClientIntent(req.Reasoning),
)
if runErr != nil {
if shouldPersistEinoAgentTraceAfterRunError(baseCtx) {
@@ -386,18 +377,7 @@ func (h *AgentHandler) EinoSingleAgentLoop(c *gin.Context) {
}
if prep.AssistantMessageID != "" {
mcpIDsJSON := ""
if len(result.MCPExecutionIDs) > 0 {
jsonData, _ := json.Marshal(result.MCPExecutionIDs)
mcpIDsJSON = string(jsonData)
}
_, _ = h.db.Exec(
"UPDATE messages SET content = ?, mcp_execution_ids = ?, updated_at = ? WHERE id = ?",
result.Response,
mcpIDsJSON,
time.Now(),
prep.AssistantMessageID,
)
_ = h.db.UpdateAssistantMessageFinalize(prep.AssistantMessageID, result.Response, result.MCPExecutionIDs, multiagent.AggregatedReasoningFromTraceJSON(result.LastAgentTraceInput))
}
if result.LastAgentTraceInput != "" || result.LastAgentTraceOutput != "" {
_ = h.db.SaveAgentTrace(prep.ConversationID, result.LastAgentTraceInput, result.LastAgentTraceOutput)
@@ -208,6 +208,7 @@ func (h *AgentHandler) MultiAgentLoopStream(c *gin.Context) {
progressCallback,
h.agentsMarkdownDir,
orch,
chatReasoningToClientIntent(req.Reasoning),
)
timeoutCancel()
@@ -309,18 +310,7 @@ func (h *AgentHandler) MultiAgentLoopStream(c *gin.Context) {
}
if assistantMessageID != "" {
mcpIDsJSON := ""
if len(cumulativeMCPExecutionIDs) > 0 {
jsonData, _ := json.Marshal(cumulativeMCPExecutionIDs)
mcpIDsJSON = string(jsonData)
}
_, _ = h.db.Exec(
"UPDATE messages SET content = ?, mcp_execution_ids = ?, updated_at = ? WHERE id = ?",
result.Response,
mcpIDsJSON,
time.Now(),
assistantMessageID,
)
_ = h.db.UpdateAssistantMessageFinalize(assistantMessageID, result.Response, cumulativeMCPExecutionIDs, multiagent.AggregatedReasoningFromTraceJSON(result.LastAgentTraceInput))
}
if result.LastAgentTraceInput != "" || result.LastAgentTraceOutput != "" {
@@ -390,6 +380,7 @@ func (h *AgentHandler) MultiAgentLoop(c *gin.Context) {
progressCallback,
h.agentsMarkdownDir,
strings.TrimSpace(req.Orchestration),
chatReasoningToClientIntent(req.Reasoning),
)
if runErr != nil {
if shouldPersistEinoAgentTraceAfterRunError(baseCtx) {
@@ -405,18 +396,7 @@ func (h *AgentHandler) MultiAgentLoop(c *gin.Context) {
}
if prep.AssistantMessageID != "" {
mcpIDsJSON := ""
if len(result.MCPExecutionIDs) > 0 {
jsonData, _ := json.Marshal(result.MCPExecutionIDs)
mcpIDsJSON = string(jsonData)
}
_, _ = h.db.Exec(
"UPDATE messages SET content = ?, mcp_execution_ids = ?, updated_at = ? WHERE id = ?",
result.Response,
mcpIDsJSON,
time.Now(),
prep.AssistantMessageID,
)
_ = h.db.UpdateAssistantMessageFinalize(prep.AssistantMessageID, result.Response, result.MCPExecutionIDs, multiagent.AggregatedReasoningFromTraceJSON(result.LastAgentTraceInput))
}
if result.LastAgentTraceInput != "" || result.LastAgentTraceOutput != "" {
@@ -55,13 +55,7 @@ func (h *AgentHandler) prepareMultiAgentSession(req *ChatRequest) (*multiAgentPr
if getErr != nil {
agentHistoryMessages = []agent.ChatMessage{}
} else {
agentHistoryMessages = make([]agent.ChatMessage, 0, len(historyMessages))
for _, msg := range historyMessages {
agentHistoryMessages = append(agentHistoryMessages, agent.ChatMessage{
Role: msg.Role,
Content: msg.Content,
})
}
agentHistoryMessages = dbMessagesToAgentChatMessages(historyMessages)
}
}
@@ -44,6 +44,10 @@ type Server struct {
runningCancels map[string]context.CancelFunc
runningCancelsMu sync.Mutex
	abortUserNotes map[string]string // user note attached when a run is aborted from the monitor page, keyed by executionID
	// httpToolTimeoutMinutes mirrors agent.tool_timeout_minutes for tools/call via POST /api/mcp (the path that bypasses the Agent wrapper).
	// nil means unconfigured, so the default 30 minutes applies; a pointer to 0 means unlimited; >0 is the timeout in minutes.
httpToolTimeoutMinutes *int
httpToolTimeoutMu sync.RWMutex
}
type sseClient struct {
@@ -90,6 +94,39 @@ func NewServerWithStorage(logger *zap.Logger, storage MonitorStorage) *Server {
return s
}
// ConfigureHTTPToolCallTimeoutFromAgentMinutes syncs agent.tool_timeout_minutes onto tools/call invocations triggered via HTTP POST /api/mcp.
// minutes<=0 means no hard deadline (matching the config convention that 0 means unlimited); minutes>0 is the maximum wait for a single call.
// Before this is called, tools/call uses the default 30 minutes (matching the historical hard-coded value).
func (s *Server) ConfigureHTTPToolCallTimeoutFromAgentMinutes(minutes int) {
if s == nil {
return
}
v := minutes
if v < 0 {
v = 0
}
s.httpToolTimeoutMu.Lock()
defer s.httpToolTimeoutMu.Unlock()
s.httpToolTimeoutMinutes = &v
}
func (s *Server) effectiveHTTPToolCallDeadline() (context.Context, context.CancelFunc) {
const defaultDur = 30 * time.Minute
if s == nil {
return context.WithTimeout(context.Background(), defaultDur)
}
s.httpToolTimeoutMu.RLock()
mPtr := s.httpToolTimeoutMinutes
s.httpToolTimeoutMu.RUnlock()
if mPtr == nil {
return context.WithTimeout(context.Background(), defaultDur)
}
if *mPtr <= 0 {
return context.WithCancel(context.Background())
}
return context.WithTimeout(context.Background(), time.Duration(*mPtr)*time.Minute)
}
// RegisterTool registers a tool.
func (s *Server) RegisterTool(tool Tool, handler ToolHandler) {
s.mu.Lock()
@@ -457,7 +494,7 @@ func (s *Server) handleCallTool(msg *Message) *Message {
}
}
baseCtx, timeoutCancel := context.WithTimeout(context.Background(), 30*time.Minute)
baseCtx, timeoutCancel := s.effectiveHTTPToolCallDeadline()
defer timeoutCancel()
execCtx, runCancel := context.WithCancel(baseCtx)
s.registerRunningCancel(executionID, runCancel)
@@ -14,7 +14,10 @@ import (
"unicode/utf8"
"cyberstrike-ai/internal/agent"
"cyberstrike-ai/internal/config"
"cyberstrike-ai/internal/einomcp"
"cyberstrike-ai/internal/einoobserve"
"cyberstrike-ai/internal/openai"
"github.com/cloudwego/eino/adk"
"github.com/cloudwego/eino/schema"
@@ -94,6 +97,9 @@ type einoADKRunLoopArgs struct {
	// ModelFacingTrace is optional: the middleware at the end of each ChatModelAgent Handlers chain writes a snapshot of the messages about to be sent to the model;
	// when non-empty it takes precedence for serializing LastAgentTraceInput, keeping resumed runs consistent with the post-summarization/reduction context.
ModelFacingTrace *modelFacingTraceHolder
	// EinoCallbacks is optional: injects eino [callbacks] full-chain observability into the ADK Runner (see internal/einoobserve).
EinoCallbacks *config.MultiAgentEinoCallbacksConfig
}
func runEinoADKAgentLoop(ctx context.Context, args *einoADKRunLoopArgs, baseMsgs []adk.Message) (*RunResult, error) {
@@ -261,7 +267,16 @@ func runEinoADKAgentLoop(ctx context.Context, args *einoADKRunLoopArgs, baseMsgs
isErr := !success || invokeErr != nil
body := content
if invokeErr != nil {
body = invokeErr.Error()
	// Preserve the stdout already accumulated from streaming (e.g. the partial output before an execute timeout), so the tool_result is not reduced to a bare error string and the model and UI do not lose context.
tail := friendlyEinoExecuteInvokeTail(invokeErr)
	// The execute streaming wrapper may already have written the timeout sentence into content (for the ADK tool message and streaming deltas); do not append it twice.
if tail != "" && strings.Contains(content, tail) {
body = content
} else if strings.TrimSpace(content) != "" {
body = strings.TrimRight(content, "\n") + "\n\n" + tail
} else {
body = tail
}
isErr = true
}
recordPendingExecuteStdoutDup(toolName, body, isErr)
@@ -288,6 +303,16 @@ func runEinoADKAgentLoop(ctx context.Context, args *einoADKRunLoopArgs, baseMsgs
})
}
if args.EinoCallbacks != nil {
ctx = einoobserve.AttachAgentRunCallbacks(ctx, args.EinoCallbacks, einoobserve.Params{
Logger: logger,
Progress: progress,
ConversationID: conversationID,
OrchMode: orchMode,
OrchestratorName: orchestratorName,
})
}
runnerCfg := adk.RunnerConfig{
Agent: da,
EnableStreaming: true,
@@ -548,8 +573,11 @@ func runEinoADKAgentLoop(ctx context.Context, args *einoADKRunLoopArgs, baseMsgs
var subAssistantBuf string
var subReplyStreamID string
var mainAssistantBuf string
	// body already pushed to the frontend via response_delta (consistent with the accumulation in monitor.js normalizeStreamingDeltaJs)
var mainAssistWireAccum string
	var mainAssistDupTarget string // non-empty means this main-assistant stream segment must be buffered until EOF and deduped against the execute output
var reasoningBuf string
	var prevReasoningDisplay string // for the UI: accumulated display text with Claude's internal signature suffix stripped
var streamRecvErr error
type streamMsg struct {
chunk *schema.Message
@@ -597,19 +625,29 @@ func runEinoADKAgentLoop(ctx context.Context, args *einoADKRunLoopArgs, baseMsgs
var reasoningDelta string
reasoningBuf, reasoningDelta = normalizeStreamingDelta(reasoningBuf, chunk.ReasoningContent)
if reasoningDelta != "" {
if reasoningStreamID == "" {
reasoningStreamID = fmt.Sprintf("eino-reasoning-%s-%d", conversationID, atomic.AddInt64(&reasoningStreamSeq, 1))
progress("thinking_stream_start", " ", map[string]interface{}{
"streamId": reasoningStreamID,
"source": "eino",
"einoAgent": ev.AgentName,
"einoRole": einoRoleTag(ev.AgentName),
"orchestration": orchMode,
fullDisplay := openai.DisplayReasoningContent(reasoningBuf)
var displayDelta string
if strings.HasPrefix(fullDisplay, prevReasoningDisplay) {
displayDelta = fullDisplay[len(prevReasoningDisplay):]
} else {
displayDelta = fullDisplay
}
prevReasoningDisplay = fullDisplay
if displayDelta != "" {
if reasoningStreamID == "" {
reasoningStreamID = fmt.Sprintf("eino-reasoning-%s-%d", conversationID, atomic.AddInt64(&reasoningStreamSeq, 1))
progress("reasoning_chain_stream_start", " ", map[string]interface{}{
"streamId": reasoningStreamID,
"source": "eino",
"einoAgent": ev.AgentName,
"einoRole": einoRoleTag(ev.AgentName),
"orchestration": orchMode,
})
}
progress("reasoning_chain_stream_delta", displayDelta, map[string]interface{}{
"streamId": reasoningStreamID,
})
}
progress("thinking_stream_delta", reasoningDelta, map[string]interface{}{
"streamId": reasoningStreamID,
})
}
}
if chunk.Content != "" {
@@ -645,6 +683,7 @@ func runEinoADKAgentLoop(ctx context.Context, args *einoADKRunLoopArgs, baseMsgs
"einoAgent": ev.AgentName,
"orchestration": orchMode,
})
mainAssistWireAccum, _ = normalizeStreamingDelta(mainAssistWireAccum, contentDelta)
}
}
} else if !streamsMainAssistant(ev.AgentName) {
@@ -690,21 +729,29 @@ func runEinoADKAgentLoop(ctx context.Context, args *einoADKRunLoopArgs, baseMsgs
}
} else if s != "" {
if progress != nil {
progress("response_start", "", map[string]interface{}{
"conversationId": conversationID,
"mcpExecutionIds": snapshotMCPIDs(),
"messageGeneratedBy": "eino:" + ev.AgentName,
"einoRole": "orchestrator",
"einoAgent": ev.AgentName,
"orchestration": orchMode,
})
progress("response_delta", s, map[string]interface{}{
"conversationId": conversationID,
"mcpExecutionIds": snapshotMCPIDs(),
"einoRole": "orchestrator",
"einoAgent": ev.AgentName,
"orchestration": orchMode,
})
	// Compare against the execute output using TrimSpace only; what is pushed to the UI must be mainAssistantBuf itself.
	// Otherwise, when trailing whitespace/newlines disagree with the already-streamed prefix, the frontend normalize takes the concat path and duplicates text.
_, eofTail := normalizeStreamingDelta(mainAssistWireAccum, mainAssistantBuf)
if eofTail != "" {
if !streamHeaderSent {
progress("response_start", "", map[string]interface{}{
"conversationId": conversationID,
"mcpExecutionIds": snapshotMCPIDs(),
"messageGeneratedBy": "eino:" + ev.AgentName,
"einoRole": "orchestrator",
"einoAgent": ev.AgentName,
"orchestration": orchMode,
})
}
progress("response_delta", eofTail, map[string]interface{}{
"conversationId": conversationID,
"mcpExecutionIds": snapshotMCPIDs(),
"einoRole": "orchestrator",
"einoAgent": ev.AgentName,
"orchestration": orchMode,
})
mainAssistWireAccum, _ = normalizeStreamingDelta(mainAssistWireAccum, eofTail)
}
}
lastAssistant = s
runAccumulatedMsgs = append(runAccumulatedMsgs, schema.AssistantMessage(s, nil))
@@ -777,7 +824,7 @@ func runEinoADKAgentLoop(ctx context.Context, args *einoADKRunLoopArgs, baseMsgs
if mv.Role == schema.Assistant {
if progress != nil && strings.TrimSpace(msg.ReasoningContent) != "" {
progress("thinking", strings.TrimSpace(msg.ReasoningContent), map[string]interface{}{
progress("reasoning_chain", openai.DisplayReasoningContent(strings.TrimSpace(msg.ReasoningContent)), map[string]interface{}{
"conversationId": conversationID,
"source": "eino",
"einoAgent": ev.AgentName,
@@ -921,6 +968,17 @@ func einoPartialRunLastOutputHint() string {
"[Run ended abnormally; continue from the trace above without repeating completed steps.]"
}
// friendlyEinoExecuteInvokeTail turns the trailing error from non-MCP paths such as Eino execute into a short hint; other errors keep the original error text.
func friendlyEinoExecuteInvokeTail(invokeErr error) string {
if invokeErr == nil {
return ""
}
if errors.Is(invokeErr, context.DeadlineExceeded) {
return einoExecuteTimeoutUserHint()
}
return "[执行未正常结束] " + invokeErr.Error()
}
func buildEinoRunResultFromAccumulated(
orchMode string,
runAccumulatedMsgs []adk.Message,
@@ -6,6 +6,7 @@ import (
"fmt"
"io"
"strings"
"time"
"cyberstrike-ai/internal/einomcp"
"cyberstrike-ai/internal/security"
@@ -15,6 +16,24 @@ import (
"github.com/cloudwego/eino/schema"
)
// prependPythonUnbufferedEnv injects PYTHONUNBUFFERED=1 for /bin/sh -c.
// eino-ext local pushes streaming stdout line by line via bufio; python3 writing to a pipe block-buffers by default, so print output lingers in the user-space buffer,
// no newline reaches the pipe, and the command looks silent until it times out or exits. If the command already mentions PYTHONUNBUFFERED, it is not overridden.
func prependPythonUnbufferedEnv(shellCommand string) string {
if strings.TrimSpace(shellCommand) == "" {
return shellCommand
}
if strings.Contains(strings.ToUpper(shellCommand), "PYTHONUNBUFFERED") {
return shellCommand
}
return "export PYTHONUNBUFFERED=1\n" + shellCommand
}
// einoExecuteTimeoutUserHint matches the tail marker written into the ADK tool message (model-visible) and the SSE tool_result.
func einoExecuteTimeoutUserHint() string {
return "已超时终止 · Timed out"
}
// einoStreamingShellWrap wraps the StreamingShell used by the Eino filesystem middleware (cloudwego eino-ext local.Local).
// The official execute tool defaults to ExecuteStreaming without setting RunInBackendGround; with a trailing & the child process stays attached to the pipe,
// and streamStdout's line-based reads block for a long time when output carries no newline (unlike the MCP tool exec's standalone implementation).
@@ -22,10 +41,17 @@ import (
//
// A Pipe forwards the inner stream to the caller: after inner EOF and before closing the Pipe, ToolInvokeNotify.Fire is called synchronously,
// guaranteeing the run loop has recorded the execute result before the model starts its next round of output (used for the UI and for deduping the "assistant repeats the output" case).
//
// If inner returns an error directly in the validation phase (no reader established), the goroutine below never runs, so Fire must still be called;
// otherwise the pending tool_call is only force-closed when the whole run ends, out of sync with the assistant/tool soft-error text already shown.
type einoStreamingShellWrap struct {
inner filesystem.StreamingShell
invokeNotify *einomcp.ToolInvokeNotifyHolder
einoAgentName string
	// outputChunk is optional; when non-nil it is invoked for each inner ExecuteResponse chunk, consistent with the MCP tools' tool_result_delta (requires a valid toolCallId).
outputChunk func(toolName, toolCallID, chunk string)
	// toolTimeoutMinutes is aligned with agent.tool_timeout_minutes; >0 applies a context timeout to a single execute (matching MCP tools going through executeToolViaMCP). 0 means rely only on the parent ctx (e.g. the 10h whole-task cap).
toolTimeoutMinutes int
	// recordMonitor writes to tool_executions after the execute stream ends and records the executionId, so the "penetration test details" view matches regular MCP tools.
recordMonitor func(command, stdout string, success bool, invokeErr error)
}
@@ -38,24 +64,47 @@ func (w *einoStreamingShellWrap) ExecuteStreaming(ctx context.Context, input *fi
return w.inner.ExecuteStreaming(ctx, nil)
}
req := *input
cmd := strings.TrimSpace(req.Command)
userCmd := strings.TrimSpace(req.Command)
if security.IsBackgroundShellCommand(req.Command) && !req.RunInBackendGround {
req.RunInBackendGround = true
}
sr, err := w.inner.ExecuteStreaming(ctx, &req)
req.Command = prependPythonUnbufferedEnv(req.Command)
tid := strings.TrimSpace(compose.GetToolCallID(ctx))
agentTag := strings.TrimSpace(w.einoAgentName)
execCtx := ctx
var execCancel context.CancelFunc
if w.toolTimeoutMinutes > 0 {
execCtx, execCancel = context.WithTimeout(ctx, time.Duration(w.toolTimeoutMinutes)*time.Minute)
}
sr, err := w.inner.ExecuteStreaming(execCtx, &req)
if err != nil {
if execCancel != nil {
execCancel()
}
if w.recordMonitor != nil {
w.recordMonitor(userCmd, "", false, err)
}
if w.invokeNotify != nil && tid != "" {
w.invokeNotify.Fire(tid, "execute", agentTag, false, "", err)
}
return nil, err
}
tid := strings.TrimSpace(compose.GetToolCallID(ctx))
if sr == nil || w.invokeNotify == nil || tid == "" {
if execCancel != nil {
execCancel()
}
return sr, nil
}
outR, outW := schema.Pipe[*filesystem.ExecuteResponse](32)
agentTag := strings.TrimSpace(w.einoAgentName)
go func(inner *schema.StreamReader[*filesystem.ExecuteResponse], command string) {
go func(inner *schema.StreamReader[*filesystem.ExecuteResponse], command string, cancel context.CancelFunc, tctx context.Context) {
defer inner.Close()
if cancel != nil {
defer cancel()
}
var sb strings.Builder
const maxCapture = 16 * 1024
@@ -80,12 +129,18 @@ func (w *einoStreamingShellWrap) ExecuteStreaming(ctx context.Context, input *fi
hasExitCode = true
exitCode = *resp.ExitCode
}
var appended string
if remain := maxCapture - sb.Len(); remain > 0 {
out := resp.Output
if len(out) > remain {
out = out[:remain]
}
sb.WriteString(out)
appended = out
}
	// Only push the fragment actually written to sb, consistent with the truncated accumulation used by the final Fire/recordMonitor, so the final tool_result is never shorter than the deltas already shown.
if w.outputChunk != nil && strings.TrimSpace(appended) != "" {
w.outputChunk("execute", tid, appended)
}
if outW.Send(resp, nil) {
success = false
@@ -99,12 +154,33 @@ func (w *einoStreamingShellWrap) ExecuteStreaming(ctx context.Context, input *fi
success = false
invokeErr = fmt.Errorf("execute exited with code %d", exitCode)
}
	// Once WithTimeout fires, the child process is usually killed by a signal; the local side mostly reports exit -1 / canceled, and the error chain does not necessarily carry DeadlineExceeded.
	// Normalize against the ctx the execution ran under, so the UI can show "timed out" rather than an ambiguous -1.
if tctx != nil && errors.Is(tctx.Err(), context.DeadlineExceeded) {
success = false
invokeErr = context.DeadlineExceeded
}
	// ADK assembles the tool message body from this Pipe; a tail marker sent only via Notify never enters the model context. Write the timeout sentence into the stream so it matches the UI.
if invokeErr != nil && errors.Is(invokeErr, context.DeadlineExceeded) {
hint := "\n\n" + einoExecuteTimeoutUserHint() + "\n"
_ = outW.Send(&filesystem.ExecuteResponse{Output: hint}, nil)
if w.outputChunk != nil && tid != "" {
w.outputChunk("execute", tid, hint)
}
if remain := maxCapture - sb.Len(); remain > 0 {
h := hint
if len(h) > remain {
h = h[:remain]
}
sb.WriteString(h)
}
}
if w.recordMonitor != nil {
w.recordMonitor(command, sb.String(), success, invokeErr)
}
w.invokeNotify.Fire(tid, "execute", agentTag, success, sb.String(), invokeErr)
outW.Close()
}(sr, cmd)
}(sr, userCmd, execCancel, execCtx)
return outR, nil
}
@@ -161,6 +161,8 @@ func buildReductionMiddleware(ctx context.Context, mw config.MultiAgentEinoMiddl
}
// prependEinoMiddlewares returns handlers to prepend (outermost first) and optionally replaces tools when tool_search is used.
// toolSearchActive is true when the toolsearch middleware was mounted (dynamic tools split off); callers should pass this to
// injectToolNamesOnlyInstruction — tool_search is not part of the pre-middleware tools list, so name-scanning alone cannot detect it.
func prependEinoMiddlewares(
ctx context.Context,
mw *config.MultiAgentEinoMiddlewareConfig,
@@ -170,16 +172,16 @@ func prependEinoMiddlewares(
skillsRoot string,
conversationID string,
logger *zap.Logger,
) (outTools []tool.BaseTool, extraHandlers []adk.ChatModelAgentMiddleware, err error) {
) (outTools []tool.BaseTool, extraHandlers []adk.ChatModelAgentMiddleware, toolSearchActive bool, err error) {
if mw == nil {
return tools, nil, nil
return tools, nil, false, nil
}
outTools = tools
if mw.PatchToolCallsEffective() {
patchMW, perr := patchtoolcalls.New(ctx, &patchtoolcalls.Config{})
if perr != nil {
return nil, nil, fmt.Errorf("patchtoolcalls: %w", perr)
return nil, nil, false, fmt.Errorf("patchtoolcalls: %w", perr)
}
extraHandlers = append(extraHandlers, patchMW)
}
@@ -190,7 +192,7 @@ func prependEinoMiddlewares(
} else {
redMW, rerr := buildReductionMiddleware(ctx, *mw, conversationID, einoLoc, logger)
if rerr != nil {
return nil, nil, rerr
return nil, nil, false, rerr
}
extraHandlers = append(extraHandlers, redMW)
}
@@ -209,10 +211,11 @@ func prependEinoMiddlewares(
if split && len(dynamic) > 0 {
ts, terr := toolsearch.New(ctx, &toolsearch.Config{DynamicTools: dynamic})
if terr != nil {
return nil, nil, fmt.Errorf("toolsearch: %w", terr)
return nil, nil, false, fmt.Errorf("toolsearch: %w", terr)
}
extraHandlers = append(extraHandlers, ts)
outTools = static
toolSearchActive = true
if logger != nil {
logger.Info("eino middleware: tool_search enabled",
zap.Int("static_tools", len(static)),
@@ -233,12 +236,12 @@ func prependEinoMiddlewares(
}
baseDir := filepath.Join(skillsRoot, rel, sanitizeEinoPathSegment(conversationID))
if mk := os.MkdirAll(baseDir, 0o755); mk != nil {
return nil, nil, fmt.Errorf("plantask mkdir: %w", mk)
return nil, nil, toolSearchActive, fmt.Errorf("plantask mkdir: %w", mk)
}
ptBE := &localPlantaskBackend{Local: einoLoc}
pt, perr := plantask.New(ctx, &plantask.Config{Backend: ptBE, BaseDir: baseDir})
if perr != nil {
return nil, nil, fmt.Errorf("plantask: %w", perr)
return nil, nil, toolSearchActive, fmt.Errorf("plantask: %w", perr)
}
extraHandlers = append(extraHandlers, pt)
if logger != nil {
@@ -247,7 +250,7 @@ func prependEinoMiddlewares(
}
}
return outTools, extraHandlers, nil
return outTools, extraHandlers, toolSearchActive, nil
}
func deepExtrasFromConfig(ma *config.MultiAgentConfig) (outputKey string, retry *adk.ModelRetryConfig, taskDesc func(context.Context, []adk.Agent) (string, error)) {
@@ -13,6 +13,7 @@ import (
"cyberstrike-ai/internal/config"
"cyberstrike-ai/internal/einomcp"
"cyberstrike-ai/internal/openai"
"cyberstrike-ai/internal/reasoning"
einoopenai "github.com/cloudwego/eino-ext/components/model/openai"
"github.com/cloudwego/eino/adk"
@@ -37,6 +38,7 @@ func RunEinoSingleChatModelAgent(
history []agent.ChatMessage,
roleTools []string,
progress func(eventType, message string, data interface{}),
reasoningClient *reasoning.ClientIntent,
) (*RunResult, error) {
if appCfg == nil || ag == nil {
return nil, fmt.Errorf("eino single: 配置或 Agent 为空")
@@ -94,7 +96,7 @@ func RunEinoSingleChatModelAgent(
return nil, err
}
mainToolsForCfg, mainOrchestratorPre, err := prependEinoMiddlewares(ctx, &ma.EinoMiddleware, einoMWMain, mainTools, einoLoc, skillsRoot, conversationID, logger)
mainToolsForCfg, mainOrchestratorPre, singleToolSearchActive, err := prependEinoMiddlewares(ctx, &ma.EinoMiddleware, einoMWMain, mainTools, einoLoc, skillsRoot, conversationID, logger)
if err != nil {
return nil, fmt.Errorf("eino single eino 中间件: %w", err)
}
@@ -121,6 +123,7 @@ func RunEinoSingleChatModelAgent(
Model: appCfg.OpenAI.Model,
HTTPClient: httpClient,
}
reasoning.ApplyToEinoChatModelConfig(baseModelCfg, &appCfg.OpenAI, reasoningClient)
mainModel, err := einoopenai.NewChatModel(ctx, baseModelCfg)
if err != nil {
@@ -140,7 +143,7 @@ func RunEinoSingleChatModelAgent(
}
if einoSkillMW != nil {
if einoFSTools && einoLoc != nil {
fsMw, fsErr := subAgentFilesystemMiddleware(ctx, einoLoc, toolInvokeNotify, einoSingleAgentName, einoExecMonitor)
fsMw, fsErr := subAgentFilesystemMiddleware(ctx, einoLoc, toolInvokeNotify, einoSingleAgentName, einoExecMonitor, agentToolTimeoutMinutes(appCfg), toolOutputChunk)
if fsErr != nil {
return nil, fmt.Errorf("eino single filesystem 中间件: %w", fsErr)
}
@@ -170,27 +173,20 @@ func RunEinoSingleChatModelAgent(
UnknownToolsHandler: einomcp.UnknownToolReminderHandler(),
ToolCallMiddlewares: []compose.ToolMiddleware{
hitlToolCallMiddleware(),
{Invokable: softRecoveryToolCallMiddleware()},
softRecoveryToolMiddleware(),
},
},
EmitInternalEvents: true,
}
ins := injectToolNamesOnlyInstruction(ctx, ag.EinoSingleAgentSystemInstruction(), mainTools)
ins := injectToolNamesOnlyInstruction(ctx, ag.EinoSingleAgentSystemInstruction(), mainTools, singleToolSearchActive)
if logger != nil {
names := collectToolNames(ctx, mainTools)
mountedNames := collectToolNames(ctx, mainToolsForCfg)
hasToolSearch := false
for _, n := range names {
if strings.EqualFold(strings.TrimSpace(n), "tool_search") {
hasToolSearch = true
break
}
}
logger.Info("eino tool-name injection",
zap.String("scope", "eino_single"),
zap.Int("tool_names", len(names)),
zap.Int("mounted_tool_names", len(mountedNames)),
zap.Bool("has_tool_search", hasToolSearch),
zap.Bool("tool_search_middleware", singleToolSearchActive),
)
}
@@ -244,6 +240,7 @@ func RunEinoSingleChatModelAgent(
ToolInvokeNotify: toolInvokeNotify,
DA: chatAgent,
ModelFacingTrace: modelFacingTrace,
EinoCallbacks: &ma.EinoCallbacks,
EmptyResponseMessage: "(Eino ADK single-agent session completed but no assistant text was captured. Check process details or logs.) " +
"Eino ADK 单代理会话已完成,但未捕获到助手文本输出。请查看过程详情或日志。)",
}, baseMsgs)
@@ -82,6 +82,8 @@ func subAgentFilesystemMiddleware(
invokeNotify *einomcp.ToolInvokeNotifyHolder,
einoAgentName string,
recordMonitor func(command, stdout string, success bool, invokeErr error),
toolTimeoutMinutes int,
outputChunk func(toolName, toolCallID, chunk string),
) (adk.ChatModelAgentMiddleware, error) {
if loc == nil {
return nil, nil
@@ -89,10 +91,20 @@ func subAgentFilesystemMiddleware(
return filesystem.New(ctx, &filesystem.MiddlewareConfig{
Backend: loc,
StreamingShell: &einoStreamingShellWrap{
inner: loc,
invokeNotify: invokeNotify,
einoAgentName: strings.TrimSpace(einoAgentName),
recordMonitor: recordMonitor,
inner: loc,
invokeNotify: invokeNotify,
einoAgentName: strings.TrimSpace(einoAgentName),
outputChunk: outputChunk,
recordMonitor: recordMonitor,
toolTimeoutMinutes: toolTimeoutMinutes,
},
})
}
// agentToolTimeoutMinutes returns agent.tool_timeout_minutes (consistent with executeToolViaMCP); returns 0 when cfg is nil.
func agentToolTimeoutMinutes(cfg *config.Config) int {
if cfg == nil {
return 0
}
return cfg.Agent.ToolTimeoutMinutes
}
@@ -214,7 +214,7 @@ func summarizeFinalizeWithRecentAssistantToolTrail(
selectedCount++
}
// Restore chronological order
// Restore chronological order. Each round holds the original *schema.Message pointers, preserving ReasoningContent (required by DeepSeek for tool-call resumption).
selectedMsgs := make([]adk.Message, 0, 8)
for i := len(selectedRoundsReverse) - 1; i >= 0; i-- {
selectedMsgs = append(selectedMsgs, selectedRoundsReverse[i].messages...)
@@ -9,34 +9,43 @@ import (
// injectToolNamesOnlyInstruction prepends a compact tool-name-only section into
// the system instruction so the model can reference current callable names.
func injectToolNamesOnlyInstruction(ctx context.Context, instruction string, tools []tool.BaseTool) string {
// toolSearchMiddlewareActive must be true when prependEinoMiddlewares mounted toolsearch (dynamic tools); do not infer this
// by scanning tool names — tool_search is injected by middleware and is usually absent from the pre-split tools list.
func injectToolNamesOnlyInstruction(ctx context.Context, instruction string, tools []tool.BaseTool, toolSearchMiddlewareActive bool) string {
names := collectToolNames(ctx, tools)
if len(names) == 0 {
return strings.TrimSpace(instruction)
}
hasToolSearch := false
for _, n := range names {
if strings.EqualFold(strings.TrimSpace(n), "tool_search") {
hasToolSearch = true
break
hasToolSearch := toolSearchMiddlewareActive
if !hasToolSearch {
for _, n := range names {
if strings.EqualFold(strings.TrimSpace(n), "tool_search") {
hasToolSearch = true
break
}
}
}
var sb strings.Builder
sb.WriteString("以下是当前会话中可调用的工具名称列表(仅名称,无参数定义):\n")
sb.WriteString("以下是当前会话绑定的工具名称索引(仅名称,无参数 JSON Schema)。\n")
sb.WriteString("说明:若启用了 tool_search,则列表里可能含「非常驻」工具——它们不一定出现在当前轮次下发给模型的工具定义中;在未看到该工具的完整 schema 前,禁止凭名称臆测参数。\n")
for _, name := range names {
sb.WriteString("- ")
sb.WriteString(name)
sb.WriteByte('\n')
}
sb.WriteString("\n使用规则:\n")
	sb.WriteString("1) 以上仅为名称列表,不含参数定义。\n")
	sb.WriteString("1) 以上仅为名称索引,不含参数定义。禁止猜测参数名、类型、枚举取值或是否必填。\n")
if hasToolSearch {
sb.WriteString("2) 在调用具体工具前,应先使用 tool_search 查看工具详情与参数要求,再发起调用。\n")
sb.WriteString("【强制 / 最高优先级】本会话已启用 tool_search(动态工具池)。凡名称索引里出现、但你在「当前请求所附 tools 定义」中看不到其完整参数 schema 的工具,一律必须先调用 tool_search;为省 token 或赶进度而跳过 tool_search、直接调用业务工具,属于明确禁止的错误流程。\n")
sb.WriteString("2) 默认策略:只要对目标工具的参数定义有任何不确定,就先 tool_search;宁可多一次 tool_search,也不要在未见 schema 时盲调业务工具。\n")
sb.WriteString("3) 调用顺序:先 tool_search(唯一必填参数 regex_pattern:按工具名匹配的正则,如子串 nuclei 或 ^exact_tool_name$)→ 在后续轮次确认目标工具已出现在 tools 列表且已阅读其 schema → 再发起对该工具的真实调用。\n")
sb.WriteString("4) tool_search 的返回仅为匹配到的工具名列表;schema 在解锁后的下一轮才会下发。禁止在 schema 未出现时编造 JSON 参数。\n")
sb.WriteString("5) 不要臆造不存在的工具名。\n\n")
} else {
sb.WriteString("2) 调用具体工具前,请先确认该工具的参数要求;不确定时先澄清再调用。\n")
sb.WriteString("2) 调用具体工具前,请先确认该工具的参数要求(以当前请求中的工具定义为准);不确定时先澄清再调用。\n")
sb.WriteString("3) 不要臆造不存在的工具名。\n\n")
}
sb.WriteString("3) 不要臆造不存在的工具名。\n\n")
if s := strings.TrimSpace(instruction); s != "" {
sb.WriteString(s)
}
@@ -0,0 +1,22 @@
package multiagent
import (
"strings"
"testing"
)
// The Eino execute dedup branch must compute the EOF-flush tail against mainAssistantBuf itself.
// Mistakenly using TrimSpace(mainAssistantBuf) mismatches the already-pushed prefix at whitespace, sending normalize down the concat path and duplicating text.
func TestNormalizeStreamingDelta_eofTailUsesRawBufNotTrim(t *testing.T) {
wireAccum := "phrase "
rawFull := "phrase \n"
_, tail := normalizeStreamingDelta(wireAccum, rawFull)
if want := "\n"; tail != want {
t.Fatalf("tail=%q want %q", tail, want)
}
nextWrong, badTail := normalizeStreamingDelta(wireAccum, strings.TrimSpace(rawFull))
if badTail != "phrase" || nextWrong != "phrase phrase" {
t.Fatalf("trimmed full vs wire prefix mismatch should concat-append; got next=%q badTail=%q", nextWrong, badTail)
}
}
@@ -0,0 +1,52 @@
package multiagent
import (
"encoding/json"
"fmt"
"strings"
)
// AggregatedReasoningFromTraceJSON concatenates non-empty assistant `reasoning_content`
// fields from last_react-style JSON (slice of message objects) in document order.
// Used to persist on the single assistant bubble row for audit and for GetMessages fallback
// when the full trace JSON is unavailable. For strict per-message replay, prefer last_react_input.
func AggregatedReasoningFromTraceJSON(traceJSON string) string {
traceJSON = strings.TrimSpace(traceJSON)
if traceJSON == "" {
return ""
}
var arr []map[string]interface{}
if err := json.Unmarshal([]byte(traceJSON), &arr); err != nil {
return ""
}
var b strings.Builder
for _, m := range arr {
role, _ := m["role"].(string)
if !strings.EqualFold(strings.TrimSpace(role), "assistant") {
continue
}
rc := reasoningContentFromMessageMap(m)
if rc == "" {
continue
}
if b.Len() > 0 {
b.WriteByte('\n')
}
b.WriteString(rc)
}
return b.String()
}
func reasoningContentFromMessageMap(m map[string]interface{}) string {
if m == nil {
return ""
}
switch v := m["reasoning_content"].(type) {
case string:
return strings.TrimSpace(v)
case nil:
return ""
default:
return strings.TrimSpace(fmt.Sprint(v))
}
}
@@ -0,0 +1,20 @@
package multiagent
import "testing"
func TestAggregatedReasoningFromTraceJSON(t *testing.T) {
const j = `[
{"role":"user","content":"hi"},
{"role":"assistant","content":"c1","reasoning_content":"r1","tool_calls":[{"id":"1","type":"function","function":{"name":"f","arguments":"{}"}}]},
{"role":"tool","tool_call_id":"1","content":"out"},
{"role":"assistant","content":"c2","reasoning_content":"r2"}
]`
got := AggregatedReasoningFromTraceJSON(j)
want := "r1\nr2"
if got != want {
t.Fatalf("got %q want %q", got, want)
}
if AggregatedReasoningFromTraceJSON("") != "" || AggregatedReasoningFromTraceJSON("[]") != "" {
t.Fatal("empty expected")
}
}
@@ -17,6 +17,7 @@ import (
"cyberstrike-ai/internal/config"
"cyberstrike-ai/internal/einomcp"
"cyberstrike-ai/internal/openai"
"cyberstrike-ai/internal/reasoning"
einoopenai "github.com/cloudwego/eino-ext/components/model/openai"
"github.com/cloudwego/eino/adk"
@@ -48,6 +49,7 @@ type toolCallPendingInfo struct {
// RunDeepAgent runs one conversation round using the Eino multi-agent preset orchestration (deep / plan_execute / supervisor; streaming events are emitted through the progress callback).
// A non-empty orchestrationOverride takes precedence (e.g. chat/WebShell request body); otherwise multi_agent.orchestration (legacy yaml) is used; if both are empty, deep is assumed.
// reasoningClient comes from ChatRequest.reasoning; may be nil (bots/batch runs use the global openai.reasoning).
func RunDeepAgent(
ctx context.Context,
appCfg *config.Config,
@@ -61,6 +63,7 @@ func RunDeepAgent(
progress func(eventType, message string, data interface{}),
agentsMarkdownDir string,
orchestrationOverride string,
reasoningClient *reasoning.ClientIntent,
) (*RunResult, error) {
if appCfg == nil || ma == nil || ag == nil {
return nil, fmt.Errorf("multiagent: 配置或 Agent 为空")
@@ -163,6 +166,7 @@ func RunDeepAgent(
Model: appCfg.OpenAI.Model,
HTTPClient: httpClient,
}
reasoning.ApplyToEinoChatModelConfig(baseModelCfg, &appCfg.OpenAI, reasoningClient)
deepMaxIter := ma.MaxIteration
if deepMaxIter <= 0 {
@@ -219,7 +223,7 @@ func RunDeepAgent(
return nil, fmt.Errorf("子代理 %q 工具: %w", id, err)
}
subToolsForCfg, subPre, err := prependEinoMiddlewares(ctx, &ma.EinoMiddleware, einoMWSub, subTools, einoLoc, skillsRoot, conversationID, logger)
subToolsForCfg, subPre, subToolSearchActive, err := prependEinoMiddlewares(ctx, &ma.EinoMiddleware, einoMWSub, subTools, einoLoc, skillsRoot, conversationID, logger)
if err != nil {
return nil, fmt.Errorf("子代理 %q eino 中间件: %w", id, err)
}
@@ -240,7 +244,7 @@ func RunDeepAgent(
}
if einoSkillMW != nil {
if einoFSTools && einoLoc != nil {
subFs, fsErr := subAgentFilesystemMiddleware(ctx, einoLoc, toolInvokeNotify, id, einoExecMonitor)
subFs, fsErr := subAgentFilesystemMiddleware(ctx, einoLoc, toolInvokeNotify, id, einoExecMonitor, agentToolTimeoutMinutes(appCfg), toolOutputChunk)
if fsErr != nil {
return nil, fmt.Errorf("子代理 %q filesystem 中间件: %w", id, fsErr)
}
@@ -256,23 +260,16 @@ func RunDeepAgent(
subHandlers = append(subHandlers, teleMw)
}
subInstrFinal := injectToolNamesOnlyInstruction(ctx, instr, subTools)
subInstrFinal := injectToolNamesOnlyInstruction(ctx, instr, subTools, subToolSearchActive)
if logger != nil {
subNames := collectToolNames(ctx, subTools)
mountedNames := collectToolNames(ctx, subToolsForCfg)
hasToolSearch := false
for _, n := range subNames {
if strings.EqualFold(strings.TrimSpace(n), "tool_search") {
hasToolSearch = true
break
}
}
logger.Info("eino tool-name injection",
zap.String("scope", "sub_agent"),
zap.String("agent", id),
zap.Int("tool_names", len(subNames)),
zap.Int("mounted_tool_names", len(mountedNames)),
zap.Bool("has_tool_search", hasToolSearch),
zap.Bool("tool_search_middleware", subToolSearchActive),
)
}
sa, err := adk.NewChatModelAgent(ctx, &adk.ChatModelAgentConfig{
@@ -286,7 +283,7 @@ func RunDeepAgent(
UnknownToolsHandler: einomcp.UnknownToolReminderHandler(),
ToolCallMiddlewares: []compose.ToolMiddleware{
hitlToolCallMiddleware(),
{Invokable: softRecoveryToolCallMiddleware()},
softRecoveryToolMiddleware(),
},
},
EmitInternalEvents: true,
@@ -337,28 +334,21 @@ func RunDeepAgent(
if err != nil {
return nil, err
}
mainToolsForCfg, mainOrchestratorPre, err := prependEinoMiddlewares(ctx, &ma.EinoMiddleware, einoMWMain, mainTools, einoLoc, skillsRoot, conversationID, logger)
mainToolsForCfg, mainOrchestratorPre, mainToolSearchActive, err := prependEinoMiddlewares(ctx, &ma.EinoMiddleware, einoMWMain, mainTools, einoLoc, skillsRoot, conversationID, logger)
if err != nil {
return nil, err
}
orchInstruction = injectToolNamesOnlyInstruction(ctx, orchInstruction, mainTools)
orchInstruction = injectToolNamesOnlyInstruction(ctx, orchInstruction, mainTools, mainToolSearchActive)
if logger != nil {
mainNames := collectToolNames(ctx, mainTools)
mountedNames := collectToolNames(ctx, mainToolsForCfg)
hasToolSearch := false
for _, n := range mainNames {
if strings.EqualFold(strings.TrimSpace(n), "tool_search") {
hasToolSearch = true
break
}
}
logger.Info("eino tool-name injection",
zap.String("scope", "orchestrator"),
zap.String("orchestration", orchMode),
zap.Int("tool_names", len(mainNames)),
zap.Int("mounted_tool_names", len(mountedNames)),
zap.Bool("has_tool_search", hasToolSearch),
zap.Bool("tool_search_middleware", mainToolSearchActive),
)
}
@@ -386,10 +376,12 @@ func RunDeepAgent(
if einoLoc != nil && einoFSTools {
deepBackend = einoLoc
deepShell = &einoStreamingShellWrap{
inner: einoLoc,
invokeNotify: toolInvokeNotify,
einoAgentName: orchestratorName,
recordMonitor: einoExecMonitor,
inner: einoLoc,
invokeNotify: toolInvokeNotify,
einoAgentName: orchestratorName,
outputChunk: toolOutputChunk,
recordMonitor: einoExecMonitor,
toolTimeoutMinutes: agentToolTimeoutMinutes(appCfg),
}
}
@@ -435,7 +427,7 @@ func RunDeepAgent(
UnknownToolsHandler: einomcp.UnknownToolReminderHandler(),
ToolCallMiddlewares: []compose.ToolMiddleware{
hitlToolCallMiddleware(),
{Invokable: softRecoveryToolCallMiddleware()},
softRecoveryToolMiddleware(),
},
},
EmitInternalEvents: true,
@@ -453,7 +445,7 @@ func RunDeepAgent(
// Build the filesystem middleware (same as the Deep sub-agent)
var peFsMw adk.ChatModelAgentMiddleware
if einoSkillMW != nil && einoFSTools && einoLoc != nil {
peFsMw, err = subAgentFilesystemMiddleware(ctx, einoLoc, toolInvokeNotify, "executor", einoExecMonitor)
peFsMw, err = subAgentFilesystemMiddleware(ctx, einoLoc, toolInvokeNotify, "executor", einoExecMonitor, agentToolTimeoutMinutes(appCfg), toolOutputChunk)
if err != nil {
return nil, fmt.Errorf("plan_execute filesystem 中间件: %w", err)
}
@@ -581,6 +573,7 @@ func RunDeepAgent(
ToolInvokeNotify: toolInvokeNotify,
DA: da,
ModelFacingTrace: modelFacingTrace,
EinoCallbacks: &ma.EinoCallbacks,
EmptyResponseMessage: "(Eino multi-agent orchestration completed but no assistant text was captured. Check process details or logs.) " +
"(Eino 多代理编排已完成,但未捕获到助手文本输出。请查看过程详情或日志。)",
}, baseMsgs)
@@ -636,8 +629,13 @@ func historyToMessages(history []agent.ChatMessage, appCfg *config.Config, mwCfg
}
case "assistant":
toolSchema := chatToolCallsToSchema(h.ToolCalls)
if len(toolSchema) > 0 || strings.TrimSpace(h.Content) != "" {
raw = append(raw, schema.AssistantMessage(h.Content, toolSchema))
hasRC := strings.TrimSpace(h.ReasoningContent) != ""
if len(toolSchema) > 0 || strings.TrimSpace(h.Content) != "" || hasRC {
am := schema.AssistantMessage(h.Content, toolSchema)
if hasRC {
am.ReasoningContent = strings.TrimSpace(h.ReasoningContent)
}
raw = append(raw, am)
}
case "tool":
if strings.TrimSpace(h.ToolCallID) == "" && strings.TrimSpace(h.Content) == "" {
@@ -0,0 +1,22 @@
package multiagent
import (
"testing"
"cyberstrike-ai/internal/agent"
)
func TestHistoryToMessagesPreservesReasoningContent(t *testing.T) {
h := []agent.ChatMessage{
{Role: "user", Content: "u"},
{Role: "assistant", Content: "c", ReasoningContent: "r1", ToolCalls: []agent.ToolCall{{ID: "t1", Type: "function", Function: agent.FunctionCall{Name: "f", Arguments: map[string]interface{}{}}}}},
}
msgs := historyToMessages(h, nil, nil)
if len(msgs) != 2 {
t.Fatalf("len=%d", len(msgs))
}
am := msgs[1]
if am.ReasoningContent != "r1" || am.Content != "c" {
t.Fatalf("got reasoning=%q content=%q", am.ReasoningContent, am.Content)
}
}
@@ -8,6 +8,7 @@ import (
"strings"
"github.com/cloudwego/eino/compose"
"github.com/cloudwego/eino/schema"
)
// softRecoveryToolCallMiddleware returns an InvokableToolMiddleware that catches
@@ -16,8 +17,9 @@ import (
// returned to the LLM. This allows the model to self-correct within the same
// iteration rather than crashing the entire graph and requiring a full replay.
//
// Without this middleware, a JSON parse failure in any tool's InvokableRun propagates
// as a hard error through the Eino ToolsNode → [NodeRunError] → ev.Err, which
// Without Invokable (+ Streamable where applicable) registration, a JSON parse failure
// in InvokableRun / StreamableRun propagates as a hard error through the Eino ToolsNode
// → [NodeRunError] → ev.Err, which
// either triggers the full-replay retry loop (expensive) or terminates the run
// entirely once retries are exhausted. With it, the LLM simply sees an error message
// in the tool result and can adjust its next tool call accordingly.
@@ -39,6 +41,44 @@ func softRecoveryToolCallMiddleware() compose.InvokableToolMiddleware {
}
}
// softRecoveryStreamableToolCallMiddleware mirrors softRecoveryToolCallMiddleware for
// tools that implement StreamableTool only (e.g. Eino ADK filesystem execute).
// Eino applies Invokable vs Streamable middleware to disjoint code paths in ToolsNode;
// registering only Invokable leaves streaming tools uncovered — empty/malformed JSON
// then fails inside [LocalStreamFunc] before the inner endpoint runs.
func softRecoveryStreamableToolCallMiddleware() compose.StreamableToolMiddleware {
return func(next compose.StreamableToolEndpoint) compose.StreamableToolEndpoint {
return func(ctx context.Context, input *compose.ToolInput) (*compose.StreamToolOutput, error) {
out, err := next(ctx, input)
if err == nil {
return out, nil
}
if !isSoftRecoverableToolError(err) {
return out, err
}
toolName := ""
args := ""
if input != nil {
toolName = input.Name
args = input.Arguments
}
msg := buildSoftRecoveryMessage(toolName, args, err)
return &compose.StreamToolOutput{
Result: schema.StreamReaderFromArray([]string{msg}),
}, nil
}
}
}
// softRecoveryToolMiddleware returns a ToolMiddleware with both Invokable and Streamable
// soft recovery (same semantics as hitlToolCallMiddleware bundling).
func softRecoveryToolMiddleware() compose.ToolMiddleware {
return compose.ToolMiddleware{
Invokable: softRecoveryToolCallMiddleware(),
Streamable: softRecoveryStreamableToolCallMiddleware(),
}
}
// isSoftRecoverableToolError determines whether a tool execution error should be
// silently converted to a tool-result message rather than crashing the graph.
//
@@ -4,6 +4,8 @@ import (
"context"
"encoding/json"
"errors"
"io"
"strings"
"testing"
"github.com/cloudwego/eino/compose"
@@ -108,6 +110,39 @@ func TestSoftRecoveryToolCallMiddleware_PassesThrough(t *testing.T) {
}
}
func TestSoftRecoveryStreamableToolCallMiddleware_LocalStreamFuncJSONError(t *testing.T) {
mw := softRecoveryStreamableToolCallMiddleware()
next := func(ctx context.Context, input *compose.ToolInput) (*compose.StreamToolOutput, error) {
return nil, errors.New(`[LocalStreamFunc] failed to unmarshal arguments in json, toolName=execute, err="Syntax error no sources available, the input json is empty`)
}
wrapped := mw(next)
out, err := wrapped(context.Background(), &compose.ToolInput{
Name: "execute",
Arguments: "",
})
if err != nil {
t.Fatalf("expected nil error (soft recovery), got: %v", err)
}
if out == nil || out.Result == nil {
t.Fatal("expected stream result")
}
var sb strings.Builder
for {
chunk, rerr := out.Result.Recv()
if errors.Is(rerr, io.EOF) {
break
}
if rerr != nil {
t.Fatalf("recv: %v", rerr)
}
sb.WriteString(chunk)
}
text := sb.String()
if !containsAll(text, "[Tool Error]", "execute", "JSON") {
t.Fatalf("recovery message missing expected content: %s", text)
}
}
func TestSoftRecoveryToolCallMiddleware_ConvertsJSONError(t *testing.T) {
mw := softRecoveryToolCallMiddleware()
next := func(ctx context.Context, input *compose.ToolInput) (*compose.ToolOutput, error) {
@@ -9,6 +9,9 @@ package openai
// Stream: Claude SSE (event: content_block_delta / message_delta) → OpenAI SSE format
// Auth: Bearer → x-api-key
// Tools: OpenAI tools[] → Claude tools[] (input_schema)
//
// Extended thinking: the top-level `thinking` field is passed through from the OpenAI request body; `thinking` blocks in
// the response map to `reasoning_content` (readable prefix + internal JSON suffix preserving the signature for multi-turn
// tool continuation; the UI strips it via openai.DisplayReasoningContent).
import (
"bufio"
@@ -38,6 +41,7 @@ type claudeRequest struct {
Messages []claudeMessage `json:"messages"`
Tools []claudeTool `json:"tools,omitempty"`
Stream bool `json:"stream,omitempty"`
Thinking json.RawMessage `json:"thinking,omitempty"`
}
type claudeMessage struct {
@@ -76,6 +80,10 @@ type claudeContentBlock struct {
// text block
Text string `json:"text,omitempty"`
// thinking block (extended thinking)
Thinking string `json:"thinking,omitempty"`
Signature string `json:"signature,omitempty"`
// tool_use block (returned by the assistant)
ID string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
@@ -176,7 +184,13 @@ func convertOpenAIToClaude(payload interface{}) (*claudeRequest, error) {
// tool_calls (assistant message contains tool calls)
if role == "assistant" {
rc, _ := mm["reasoning_content"].(string)
_, thinkingReplay := parseClaudeReasoningAssistantBlocks(rc)
var blocks []claudeContentBlock
for _, tb := range thinkingReplay {
blocks = append(blocks, tb)
}
if content != "" {
blocks = append(blocks, claudeContentBlock{Type: "text", Text: content})
}
@@ -290,6 +304,13 @@ func convertOpenAIToClaude(payload interface{}) (*claudeRequest, error) {
}
}
// Extended thinking (Anthropic top-level); merged from Eino ExtraFields / admin extras.
if th, ok := oai["thinking"]; ok && th != nil {
if raw, err := json.Marshal(th); err == nil && len(raw) > 0 && string(raw) != "null" {
req.Thinking = json.RawMessage(raw)
}
}
return req, nil
}
@@ -318,9 +339,12 @@ func claudeToOpenAIResponseJSON(claudeBody []byte) ([]byte, error) {
var textContent string
var toolCalls []interface{}
var thinkingBlocks []claudeContentBlock
for _, block := range cr.Content {
switch block.Type {
case "thinking":
thinkingBlocks = append(thinkingBlocks, block)
case "text":
textContent += block.Text
case "tool_use":
@@ -344,6 +368,18 @@ func claudeToOpenAIResponseJSON(claudeBody []byte) ([]byte, error) {
if len(toolCalls) > 0 {
message["tool_calls"] = toolCalls
}
if len(thinkingBlocks) > 0 {
var parts []string
for _, tb := range thinkingBlocks {
if strings.TrimSpace(tb.Thinking) != "" {
parts = append(parts, tb.Thinking)
}
}
rc := appendClaudeReasoningRoundTrip(strings.Join(parts, "\n\n"), thinkingBlocks)
if rc != "" {
message["reasoning_content"] = rc
}
}
choice := map[string]interface{}{
"index": 0,
@@ -901,8 +937,16 @@ func (rt *claudeRoundTripper) RoundTrip(req *http.Request) (*http.Response, erro
reader := bufio.NewReader(resp.Body)
blockToToolIndex := make(map[int]int)
blockIndexToType := make(map[int]string)
nextToolIndex := 0
type thinkingAcc struct {
text strings.Builder
sig strings.Builder
}
thinkingByIndex := make(map[int]*thinkingAcc)
var finishedThinking []claudeContentBlock
for {
line, readErr := reader.ReadString('\n')
if readErr != nil {
@@ -947,6 +991,11 @@ func (rt *claudeRoundTripper) RoundTrip(req *http.Request) (*http.Response, erro
blockIdx := int(blockIdxFlt)
cb, _ := event["content_block"].(map[string]interface{})
bt, _ := cb["type"].(string)
blockIndexToType[blockIdx] = bt
if bt == "thinking" {
thinkingByIndex[blockIdx] = &thinkingAcc{}
}
if bt == "tool_use" {
id, _ := cb["id"].(string)
@@ -986,7 +1035,35 @@ func (rt *claudeRoundTripper) RoundTrip(req *http.Request) (*http.Response, erro
delta, _ := event["delta"].(map[string]interface{})
dt, _ := delta["type"].(string)
if dt == "text_delta" {
if dt == "thinking_delta" {
tPart, _ := delta["thinking"].(string)
if tPart != "" {
if acc := thinkingByIndex[blockIdx]; acc != nil {
acc.text.WriteString(tPart)
}
oaiChunk := map[string]interface{}{
"choices": []map[string]interface{}{
{
"delta": map[string]interface{}{
"reasoning_content": tPart,
},
},
},
}
b, _ := json.Marshal(oaiChunk)
if !writeLine("data: " + string(b) + "\n\n") {
pw.Close()
return
}
}
} else if dt == "signature_delta" {
sigPart, _ := delta["signature"].(string)
if sigPart != "" {
if acc := thinkingByIndex[blockIdx]; acc != nil {
acc.sig.WriteString(sigPart)
}
}
} else if dt == "text_delta" {
text, _ := delta["text"].(string)
oaiChunk := map[string]interface{}{
"choices": []map[string]interface{}{
@@ -1031,6 +1108,21 @@ func (rt *claudeRoundTripper) RoundTrip(req *http.Request) (*http.Response, erro
}
}
case "content_block_stop":
blockIdxFlt, _ := event["index"].(float64)
blockIdx := int(blockIdxFlt)
bt := blockIndexToType[blockIdx]
if bt == "thinking" {
if acc := thinkingByIndex[blockIdx]; acc != nil {
finishedThinking = append(finishedThinking, claudeContentBlock{
Type: "thinking",
Thinking: acc.text.String(),
Signature: acc.sig.String(),
})
delete(thinkingByIndex, blockIdx)
}
}
case "message_delta":
d, _ := event["delta"].(map[string]interface{})
if sr, ok := d["stop_reason"].(string); ok {
@@ -1051,6 +1143,25 @@ func (rt *claudeRoundTripper) RoundTrip(req *http.Request) (*http.Response, erro
}
case "message_stop":
if len(finishedThinking) > 0 {
suffix := appendClaudeReasoningRoundTrip("", finishedThinking)
if strings.TrimSpace(suffix) != "" {
oaiChunk := map[string]interface{}{
"choices": []map[string]interface{}{
{
"delta": map[string]interface{}{
"reasoning_content": suffix,
},
},
},
}
b, _ := json.Marshal(oaiChunk)
if !writeLine("data: " + string(b) + "\n\n") {
pw.Close()
return
}
}
}
writeLine("data: [DONE]\n\n")
pw.Close()
return
@@ -0,0 +1,81 @@
package openai
import (
"encoding/json"
"strings"
)
// claudeReasoningRoundTripSep separates human-readable reasoning from a JSON payload of
// Anthropic thinking blocks (with signatures) for multi-turn extended thinking + tools.
// Not shown in UI (see DisplayReasoningContent).
const claudeReasoningRoundTripSep = "\n---CSAI_CLAUDE_THINKING_BLOCKS---\n"
// DisplayReasoningContent returns reasoning text suitable for the UI (strips internal
// Claude round-trip JSON suffix). Safe for DeepSeek/plain reasoning strings (no-op).
func DisplayReasoningContent(s string) string {
s = strings.TrimSpace(s)
if s == "" {
return ""
}
i := strings.LastIndex(s, claudeReasoningRoundTripSep)
if i < 0 {
return s
}
return strings.TrimSpace(s[:i])
}
func appendClaudeReasoningRoundTrip(display string, blocks []claudeContentBlock) string {
var payload []map[string]string
for _, b := range blocks {
if b.Type != "thinking" {
continue
}
payload = append(payload, map[string]string{
"type": b.Type,
"thinking": b.Thinking,
"signature": b.Signature,
})
}
if len(payload) == 0 {
return strings.TrimSpace(display)
}
js, err := json.Marshal(payload)
if err != nil {
return strings.TrimSpace(display)
}
d := strings.TrimSpace(display)
if d == "" {
return claudeReasoningRoundTripSep + string(js)
}
return d + claudeReasoningRoundTripSep + string(js)
}
// parseClaudeReasoningAssistantBlocks extracts Anthropic thinking blocks from an OpenAI-style
// reasoning_content string. When no suffix is present, blocks is nil (caller must not invent signatures).
func parseClaudeReasoningAssistantBlocks(reasoningContent string) (display string, blocks []claudeContentBlock) {
reasoningContent = strings.TrimSpace(reasoningContent)
if reasoningContent == "" {
return "", nil
}
idx := strings.LastIndex(reasoningContent, claudeReasoningRoundTripSep)
if idx < 0 {
return reasoningContent, nil
}
display = strings.TrimSpace(reasoningContent[:idx])
jsonPart := strings.TrimSpace(reasoningContent[idx+len(claudeReasoningRoundTripSep):])
var arr []struct {
Type string `json:"type"`
Thinking string `json:"thinking"`
Signature string `json:"signature"`
}
if err := json.Unmarshal([]byte(jsonPart), &arr); err != nil {
return reasoningContent, nil
}
for _, x := range arr {
if x.Type != "thinking" {
continue
}
blocks = append(blocks, claudeContentBlock{Type: "thinking", Thinking: x.Thinking, Signature: x.Signature})
}
return display, blocks
}
@@ -0,0 +1,102 @@
package openai
import (
"encoding/json"
"strings"
"testing"
)
func TestDisplayReasoningContent(t *testing.T) {
raw := "hello" + claudeReasoningRoundTripSep + `[{"type":"thinking","thinking":"x","signature":"sig"}]`
if d := DisplayReasoningContent(raw); d != "hello" {
t.Fatalf("got %q", d)
}
if DisplayReasoningContent("plain") != "plain" {
t.Fatal()
}
}
func TestAppendParseClaudeReasoningRoundTrip(t *testing.T) {
blocks := []claudeContentBlock{
{Type: "thinking", Thinking: "a", Signature: "s1"},
{Type: "thinking", Thinking: "b", Signature: "s2"},
}
s := appendClaudeReasoningRoundTrip("sum", blocks)
if !strings.Contains(s, claudeReasoningRoundTripSep) {
t.Fatal("missing sep")
}
display, back := parseClaudeReasoningAssistantBlocks(s)
if display != "sum" || len(back) != 2 {
t.Fatalf("display=%q len=%d", display, len(back))
}
if back[0].Signature != "s1" || back[1].Thinking != "b" {
t.Fatalf("%+v", back)
}
}
func TestConvertOpenAIToClaude_AssistantReasoningReplay(t *testing.T) {
rc := appendClaudeReasoningRoundTrip("vis", []claudeContentBlock{
{Type: "thinking", Thinking: "t1", Signature: "sig1"},
})
payload := map[string]interface{}{
"model": "claude-3-5-sonnet-latest",
"messages": []interface{}{
map[string]interface{}{
"role": "assistant",
"content": "out",
"reasoning_content": rc,
},
},
}
req, err := convertOpenAIToClaude(payload)
if err != nil {
t.Fatal(err)
}
if len(req.Messages) != 1 {
t.Fatalf("messages=%d", len(req.Messages))
}
blocks := req.Messages[0].Content.Blocks
if len(blocks) < 2 {
t.Fatalf("blocks=%d", len(blocks))
}
if blocks[0].Type != "thinking" || blocks[0].Signature != "sig1" {
t.Fatalf("first block %+v", blocks[0])
}
foundText := false
for _, b := range blocks {
if b.Type == "text" && b.Text == "out" {
foundText = true
}
}
if !foundText {
t.Fatalf("blocks=%+v", blocks)
}
}
func TestClaudeToOpenAIResponseJSON_Thinking(t *testing.T) {
claudeBody := []byte(`{
"id":"msg_1","type":"message","role":"assistant","model":"x","stop_reason":"end_turn",
"content":[
{"type":"thinking","thinking":"step","signature":"sigx"},
{"type":"text","text":"hi"}
]
}`)
oai, err := claudeToOpenAIResponseJSON(claudeBody)
if err != nil {
t.Fatal(err)
}
var wrap map[string]interface{}
if err := json.Unmarshal(oai, &wrap); err != nil {
t.Fatal(err)
}
choices := wrap["choices"].([]interface{})
ch0 := choices[0].(map[string]interface{})
msg := ch0["message"].(map[string]interface{})
rc, _ := msg["reasoning_content"].(string)
if !strings.Contains(rc, "step") || !strings.Contains(rc, claudeReasoningRoundTripSep) {
t.Fatalf("reasoning_content=%q", rc)
}
if msg["content"] != "hi" {
t.Fatal()
}
}
@@ -0,0 +1,250 @@
// Package reasoning maps user/config intent to CloudWeGo Eino OpenAI ChatModel fields
// (ReasoningEffort, ExtraFields such as thinking / reasoning_effort / output_config).
package reasoning
import (
"strings"
"cyberstrike-ai/internal/config"
einoopenai "github.com/cloudwego/eino-ext/components/model/openai"
)
// ClientIntent is optional per-request override from ChatRequest.reasoning.
type ClientIntent struct {
Mode string
Effort string
}
type wireProfile int
const (
wireNone wireProfile = iota
wireClaude
wireDeepseek
wireOpenAI
wireOutputConfig
)
// ApplyToEinoChatModelConfig merges reasoning-related options into cfg.
// Precondition: cfg already has APIKey, BaseURL, Model, HTTPClient set.
func ApplyToEinoChatModelConfig(cfg *einoopenai.ChatModelConfig, oa *config.OpenAIConfig, client *ClientIntent) {
if cfg == nil || oa == nil {
return
}
sr := &oa.Reasoning
allowClient := sr.AllowClientReasoningEffective()
mode := effectiveMode(sr, client, allowClient)
// Claude (Anthropic): merge admin extras first; optional extended thinking maps to top-level `thinking`
// (see internal/openai convertOpenAIToClaude). DeepSeek/OpenAI-style fields are not sent.
if strings.EqualFold(strings.TrimSpace(oa.Provider), "claude") ||
strings.EqualFold(strings.TrimSpace(oa.Provider), "anthropic") {
if len(sr.ExtraRequestFields) > 0 {
if cfg.ExtraFields == nil {
cfg.ExtraFields = make(map[string]any)
}
for k, v := range sr.ExtraRequestFields {
cfg.ExtraFields[k] = v
}
}
if mode == "off" {
return
}
applyClaudeExtendedThinking(cfg, mode, effectiveEffort(sr, client, allowClient), oa.Model)
return
}
if mode == "off" {
return
}
effort := effectiveEffort(sr, client, allowClient)
prof := resolveWireProfile(oa, sr)
// Admin-defined extra root fields (merged first; automatic keys may follow).
if len(sr.ExtraRequestFields) > 0 {
if cfg.ExtraFields == nil {
cfg.ExtraFields = make(map[string]any)
}
for k, v := range sr.ExtraRequestFields {
cfg.ExtraFields[k] = v
}
}
switch prof {
case wireClaude, wireNone:
return
case wireDeepseek:
applyDeepseek(cfg, mode, effort)
case wireOutputConfig:
applyOutputConfigEffort(cfg, mode, effort)
default: // wireOpenAI
applyOpenAICompat(cfg, mode, effort)
}
}
// applyClaudeExtendedThinking sets Anthropic Messages API `thinking` when absent from ExtraRequestFields.
// Uses adaptive + summarized display by default (per Anthropic guidance for Claude 4.x); Sonnet 3.7 uses enabled+budget.
func applyClaudeExtendedThinking(cfg *einoopenai.ChatModelConfig, mode, effort, model string) {
if cfg == nil || mode == "off" {
return
}
if cfg.ExtraFields == nil {
cfg.ExtraFields = make(map[string]any)
}
if _, exists := cfg.ExtraFields["thinking"]; exists {
return
}
m := strings.ToLower(strings.TrimSpace(model))
thinking := map[string]any{
"type": "adaptive",
"display": "summarized",
}
// Sonnet 3.7: manual extended thinking is the documented path.
if strings.Contains(m, "claude-3-7-sonnet") || strings.Contains(m, "3-7-sonnet") || strings.Contains(m, "sonnet-3.7") {
thinking = map[string]any{
"type": "enabled",
"budget_tokens": 10000,
"display": "summarized",
}
}
// Opus 4.7+: manual enabled+budget rejected — keep adaptive only.
if strings.Contains(m, "opus-4-7") || strings.Contains(m, "opus-4.7") {
thinking = map[string]any{
"type": "adaptive",
"display": "summarized",
}
}
_ = effort // reserved: map to Anthropic effort / output_config when API stabilizes in one place
cfg.ExtraFields["thinking"] = thinking
}
func effectiveMode(sr *config.OpenAIReasoningConfig, client *ClientIntent, allowClient bool) string {
server := strings.ToLower(strings.TrimSpace(sr.ModeEffective()))
if server == "" || server == "default" {
server = "auto"
}
if !allowClient || client == nil {
return server
}
cm := strings.ToLower(strings.TrimSpace(client.Mode))
if cm == "" || cm == "default" {
return server
}
return cm
}
func effectiveEffort(sr *config.OpenAIReasoningConfig, client *ClientIntent, allowClient bool) string {
se := normalizeEffort(sr.Effort)
if !allowClient || client == nil {
return se
}
ce := normalizeEffort(client.Effort)
if ce != "" {
return ce
}
return se
}
func normalizeEffort(s string) string {
e := strings.ToLower(strings.TrimSpace(s))
switch e {
case "low", "medium", "high", "max":
return e
default:
return ""
}
}
func resolveWireProfile(oa *config.OpenAIConfig, sr *config.OpenAIReasoningConfig) wireProfile {
if strings.EqualFold(strings.TrimSpace(oa.Provider), "claude") {
return wireClaude
}
p := strings.ToLower(strings.TrimSpace(sr.ProfileEffective()))
switch p {
case "output_config", "output_config_effort":
return wireOutputConfig
case "openai", "openai_compat":
return wireOpenAI
case "deepseek", "deepseek_compat":
return wireDeepseek
case "auto", "":
bu := strings.ToLower(oa.BaseURL)
mo := strings.ToLower(oa.Model)
if strings.Contains(bu, "deepseek") || strings.Contains(mo, "deepseek") {
return wireDeepseek
}
return wireOpenAI
default:
return wireOpenAI
}
}
func applyDeepseek(cfg *einoopenai.ChatModelConfig, mode, effort string) {
// auto: enable thinking for DeepSeek line; on: same; auto without effort still opens thinking.
if mode == "off" {
return
}
if mode == "auto" || mode == "on" {
if cfg.ExtraFields == nil {
cfg.ExtraFields = make(map[string]any)
}
cfg.ExtraFields["thinking"] = map[string]any{"type": "enabled"}
}
if effort != "" {
if cfg.ExtraFields == nil {
cfg.ExtraFields = make(map[string]any)
}
cfg.ExtraFields["reasoning_effort"] = effortStringForAPI(effort)
}
}
func applyOpenAICompat(cfg *einoopenai.ChatModelConfig, mode, effort string) {
if mode == "auto" && effort == "" {
return
}
e := effort
if mode == "on" && e == "" {
e = "medium"
}
if e == "" {
return
}
if e == "max" {
if cfg.ExtraFields == nil {
cfg.ExtraFields = make(map[string]any)
}
cfg.ExtraFields["reasoning_effort"] = "max"
return
}
switch e {
case "low":
cfg.ReasoningEffort = einoopenai.ReasoningEffortLevelLow
case "medium":
cfg.ReasoningEffort = einoopenai.ReasoningEffortLevelMedium
case "high":
cfg.ReasoningEffort = einoopenai.ReasoningEffortLevelHigh
}
}
func applyOutputConfigEffort(cfg *einoopenai.ChatModelConfig, mode, effort string) {
if mode == "auto" && effort == "" {
return
}
e := effort
if mode == "on" && e == "" {
e = "high"
}
if e == "" {
return
}
if cfg.ExtraFields == nil {
cfg.ExtraFields = make(map[string]any)
}
cfg.ExtraFields["output_config"] = map[string]any{"effort": effortStringForAPI(e)}
}
func effortStringForAPI(e string) string {
// Gateways expect lowercase strings; "max" kept as max.
return strings.ToLower(strings.TrimSpace(e))
}
@@ -153,6 +153,7 @@ func (e *Executor) ExecuteTool(ctx context.Context, toolName string, args map[st
// Execute the command
cmd := exec.CommandContext(ctx, toolConfig.Command, cmdArgs...)
applyDefaultTerminalEnv(cmd)
_ = prepareShellCmdSession(cmd)
e.logger.Info("执行安全工具",
zap.String("tool", toolName),
@@ -163,13 +164,14 @@ func (e *Executor) ExecuteTool(ctx context.Context, toolName string, args map[st
var err error
// If the caller provided a stdout/stderr incremental callback, read and forward output while executing.
if cb, ok := ctx.Value(ToolOutputCallbackCtxKey).(ToolOutputCallback); ok && cb != nil {
output, err = streamCommandOutput(cmd, cb)
output, err = streamCommandOutput(ctx, cmd, cb)
if err != nil && shouldRetryWithPTY(output) {
e.logger.Info("检测到工具需要 TTY,使用 PTY 重试",
zap.String("tool", toolName),
)
cmd2 := exec.CommandContext(ctx, toolConfig.Command, cmdArgs...)
applyDefaultTerminalEnv(cmd2)
_ = prepareShellCmdSession(cmd2)
output, err = runCommandWithPTY(ctx, cmd2, cb)
}
} else {
@@ -182,6 +184,7 @@ func (e *Executor) ExecuteTool(ctx context.Context, toolName string, args map[st
)
cmd2 := exec.CommandContext(ctx, toolConfig.Command, cmdArgs...)
applyDefaultTerminalEnv(cmd2)
_ = prepareShellCmdSession(cmd2)
output, err = runCommandWithPTY(ctx, cmd2, nil)
}
}
@@ -837,6 +840,8 @@ func (e *Executor) executeSystemCommand(ctx context.Context, args map[string]int
} else {
cmd = exec.CommandContext(ctx, shell, "-c", command)
}
applyDefaultTerminalEnv(cmd)
_ = prepareShellCmdSession(cmd)
// Execute the command
e.logger.Info("执行系统命令",
@@ -865,6 +870,8 @@ func (e *Executor) executeSystemCommand(ctx context.Context, args map[string]int
} else {
pidCmd = exec.CommandContext(ctx, shell, "-c", pidCommand)
}
applyDefaultTerminalEnv(pidCmd)
_ = prepareShellCmdSession(pidCmd)
// Grab the stdout pipe
stdout, err := pidCmd.StdoutPipe()
@@ -976,7 +983,7 @@ func (e *Executor) executeSystemCommand(ctx context.Context, args map[string]int
var err error
// If the caller provided a tool-output incremental callback, stream the output while executing.
if cb, ok := ctx.Value(ToolOutputCallbackCtxKey).(ToolOutputCallback); ok && cb != nil {
output, err = streamCommandOutput(cmd, cb)
output, err = streamCommandOutput(ctx, cmd, cb)
if err != nil && shouldRetryWithPTY(output) {
e.logger.Info("检测到系统命令需要 TTY,使用 PTY 重试")
cmd2 := exec.CommandContext(ctx, shell, "-c", command)
@@ -984,6 +991,7 @@ func (e *Executor) executeSystemCommand(ctx context.Context, args map[string]int
cmd2.Dir = workDir
}
applyDefaultTerminalEnv(cmd2)
_ = prepareShellCmdSession(cmd2)
output, err = runCommandWithPTY(ctx, cmd2, cb)
}
} else {
@@ -997,6 +1005,7 @@ func (e *Executor) executeSystemCommand(ctx context.Context, args map[string]int
cmd2.Dir = workDir
}
applyDefaultTerminalEnv(cmd2)
_ = prepareShellCmdSession(cmd2)
output, err = runCommandWithPTY(ctx, cmd2, nil)
}
}
@@ -1034,8 +1043,11 @@ func (e *Executor) executeSystemCommand(ctx context.Context, args map[string]int
}
// streamCommandOutput reads the command's stdout/stderr in a read-as-you-go fashion.
// The complete output is still concatenated and returned, while cb(chunk) keeps pushing increments upstream.
func streamCommandOutput(cmd *exec.Cmd, cb ToolOutputCallback) (string, error) {
// Reads in fixed-size blocks so newline-less output cannot block a line reader forever; kills the process tree on ctx cancellation.
func streamCommandOutput(ctx context.Context, cmd *exec.Cmd, cb ToolOutputCallback) (string, error) {
if err := prepareShellCmdSession(cmd); err != nil {
return "", err
}
stdoutPipe, err := cmd.StdoutPipe()
if err != nil {
return "", err
@@ -1051,18 +1063,27 @@ func streamCommandOutput(cmd *exec.Cmd, cb ToolOutputCallback) (string, error) {
return "", err
}
stopWatch := make(chan struct{})
go func() {
select {
case <-ctx.Done():
terminateCmdTree(cmd)
case <-stopWatch:
}
}()
defer close(stopWatch)
chunks := make(chan string, 64)
var wg sync.WaitGroup
readFn := func(r io.Reader) {
defer wg.Done()
br := bufio.NewReader(r)
buf := make([]byte, 8192)
for {
s, readErr := br.ReadString('\n')
if s != "" {
chunks <- s
n, readErr := r.Read(buf)
if n > 0 {
chunks <- string(buf[:n])
}
if readErr != nil {
// EOF: normal end of stream
return
}
}
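The switch from `ReadString('\n')` to fixed-size `Read` calls matters whenever a command emits output without a trailing newline (prompts, progress bars). A minimal Python sketch of the same idea, with illustrative names not taken from this repo:

```python
import io

def read_in_chunks(stream, chunk_size=8192):
    """Drain a stream with fixed-size reads instead of line reads."""
    chunks = []
    while True:
        data = stream.read(chunk_size)
        if not data:          # EOF: normal end
            break
        chunks.append(data)   # in the Go code, each chunk also goes to cb(chunk)
    return "".join(chunks)

# A prompt with no trailing newline would block a line reader forever,
# but fixed-size reads capture it as soon as it is produced.
demo = read_in_chunks(io.StringIO("Password: "))
print(repr(demo))
```

The same loop shape works for any non-seekable pipe; only the chunk size (8192 here, matching the Go buffer) is a tuning knob.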
@@ -1158,12 +1179,14 @@ func runCommandWithPTY(ctx context.Context, cmd *exec.Cmd, cb ToolOutputCallback
if runtime.GOOS == "windows" {
// The PTY approach is for Unix-like systems only; Windows keeps the original logic.
if cb != nil {
return streamCommandOutput(cmd, cb)
return streamCommandOutput(ctx, cmd, cb)
}
_ = prepareShellCmdSession(cmd)
out, err := cmd.CombinedOutput()
return string(out), err
}
_ = prepareShellCmdSession(cmd)
ptmx, err := pty.Start(cmd)
if err != nil {
return "", err
@@ -1176,9 +1199,7 @@ func runCommandWithPTY(ctx context.Context, cmd *exec.Cmd, cb ToolOutputCallback
select {
case <-ctx.Done():
_ = ptmx.Close() // unblock the reader so it exits
if cmd.Process != nil {
_ = cmd.Process.Kill()
}
terminateCmdTree(cmd)
case <-done:
}
}()
+31
@@ -0,0 +1,31 @@
//go:build !windows
package security
import (
"os/exec"
"syscall"
)
// prepareShellCmdSession runs the shell child in its own session so that, on timeout/cancel, the whole group (including grandchildren) can be SIGKILLed together.
func prepareShellCmdSession(cmd *exec.Cmd) error {
if cmd == nil {
return nil
}
if cmd.SysProcAttr == nil {
cmd.SysProcAttr = &syscall.SysProcAttr{}
}
cmd.SysProcAttr.Setsid = true
return nil
}
// terminateCmdTree best-effort terminates cmd and its process group (on Unix, after Setsid the PGID equals the leader's PID).
func terminateCmdTree(cmd *exec.Cmd) {
if cmd == nil || cmd.Process == nil {
return
}
pid := cmd.Process.Pid
if err := syscall.Kill(-pid, syscall.SIGKILL); err != nil {
_ = cmd.Process.Kill()
}
}
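The `Setsid` + negative-PID kill pattern can be exercised from Python too: `start_new_session=True` is the analogue of `Setsid`, and `os.killpg` of `syscall.Kill(-pid, ...)`. A hedged POSIX-only sketch (not from this repo):

```python
import os
import signal
import subprocess
import time

# Start a shell that forks a child; start_new_session=True puts the whole
# tree in its own session/process group (the analogue of Setsid in the Go code).
proc = subprocess.Popen(
    ["sh", "-c", "sleep 30 & wait"],
    start_new_session=True,
)

time.sleep(0.2)                      # give the shell time to fork its child
os.killpg(proc.pid, signal.SIGKILL)  # PGID == leader PID after setsid
proc.wait()
print("group terminated, returncode:", proc.returncode)
```

Without the new session, killing only `proc.pid` would orphan the `sleep` child, which is exactly the leak `terminateCmdTree` avoids.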
+17
@@ -0,0 +1,17 @@
//go:build windows
package security
import "os/exec"
func prepareShellCmdSession(cmd *exec.Cmd) error {
_ = cmd
return nil
}
func terminateCmdTree(cmd *exec.Cmd) {
if cmd == nil || cmd.Process == nil {
return
}
_ = cmd.Process.Kill()
}
+35 -6
@@ -64,6 +64,7 @@ show_progress() {
echo ""
echo "=========================================="
echo " CyberStrikeAI one-click deploy & launch script"
echo " (HTTPS with self-signed cert by default; for plain HTTP use: $0 --http)"
echo "=========================================="
echo ""
@@ -353,7 +354,18 @@ need_rebuild() {
}
# Main flow
# Start the main site with HTTPS by default (--https is passed to the binary); pass --http for plaintext HTTP.
main() {
USE_HTTPS=1
FORWARD_ARGS=()
for arg in "$@"; do
if [ "$arg" = "--http" ]; then
USE_HTTPS=0
continue
fi
FORWARD_ARGS+=("$arg")
done
# Environment checks
info "Checking the runtime environment..."
check_python
@@ -377,13 +389,30 @@ main() {
# Start the server
success "All preparation complete!"
echo ""
info "Starting the CyberStrikeAI server..."
if [ "$USE_HTTPS" -eq 1 ]; then
info "Starting the CyberStrikeAI server (HTTPS + HTTP/2, self-signed cert)..."
note "For plain HTTP, start with: $0 --http"
else
info "Starting the CyberStrikeAI server (HTTP)..."
fi
echo "=========================================="
echo ""
# Run the server
exec "./$BINARY_NAME"
# Always pass the project-root config.yaml so the config is found even when cwd is not the project root; extra args can still be appended (e.g. a later -config overrides it, since Go flags let the last value win).
if [ "$USE_HTTPS" -eq 1 ]; then
if [ "${#FORWARD_ARGS[@]}" -gt 0 ]; then
exec "./$BINARY_NAME" -config "$CONFIG_FILE" --https "${FORWARD_ARGS[@]}"
else
exec "./$BINARY_NAME" -config "$CONFIG_FILE" --https
fi
else
if [ "${#FORWARD_ARGS[@]}" -gt 0 ]; then
exec "./$BINARY_NAME" -config "$CONFIG_FILE" "${FORWARD_ARGS[@]}"
else
exec "./$BINARY_NAME" -config "$CONFIG_FILE"
fi
fi
}
# Run the main flow
main
# Run the main flow (arguments supported, e.g. ./run.sh --http)
main "$@"
+333 -1
@@ -440,6 +440,230 @@ args:
print("Body: <empty>")
def compile_response_filter(pattern: str, ignore_case: bool):
flags = 0
if ignore_case:
flags |= re.IGNORECASE
try:
return re.compile(pattern, flags)
except re.error as exc:
print(f"Invalid response_filter regex: {exc}", file=sys.stderr)
sys.exit(2)
def truncate_utf8(text: str, max_bytes: int) -> Tuple[str, bool]:
if max_bytes <= 0 or not text:
return text, False
encoded = text.encode("utf-8", errors="replace")
if len(encoded) <= max_bytes:
return text, False
truncated = encoded[:max_bytes].decode("utf-8", errors="ignore")
return truncated, True
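`truncate_utf8` cuts on a byte budget but decodes with `errors="ignore"`, so a multi-byte character split at the boundary is dropped rather than emitted as mojibake. Restated here for a quick check:

```python
from typing import Tuple

def truncate_utf8(text: str, max_bytes: int) -> Tuple[str, bool]:
    # Same logic as above: cut on a UTF-8 byte budget, then drop any
    # partial multi-byte sequence at the cut instead of emitting mojibake.
    if max_bytes <= 0 or not text:
        return text, False
    encoded = text.encode("utf-8", errors="replace")
    if len(encoded) <= max_bytes:
        return text, False
    truncated = encoded[:max_bytes].decode("utf-8", errors="ignore")
    return truncated, True

# "é" is 2 bytes in UTF-8, so a 3-byte budget over "abéc" lands mid-character:
# only "ab" survives, and the truncated flag is set.
out, flagged = truncate_utf8("abéc", 3)
print(out, flagged)
```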
def cap_line_entries(entries: List[Tuple[int, str]], max_lines: int) -> Tuple[List[Tuple[int, str]], bool]:
if max_lines <= 0 or len(entries) <= max_lines:
return entries, False
return entries[:max_lines], True
def expand_line_context(line_numbers: List[int], total_lines: int, context: int) -> List[int]:
if context <= 0:
return sorted(set(line_numbers))
included = set()
for num in line_numbers:
start = max(1, num - context)
end = min(total_lines, num + context)
for i in range(start, end + 1):
included.add(i)
return sorted(included)
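`expand_line_context` gives `grep -C`-style context: each hit widens into a window clamped to the body, and overlapping windows merge via the set. Restated for illustration:

```python
def expand_line_context(line_numbers, total_lines, context):
    # Same logic as above: widen each hit by `context` lines on both sides,
    # clamp to [1, total_lines], and de-duplicate through a set.
    if context <= 0:
        return sorted(set(line_numbers))
    included = set()
    for num in line_numbers:
        for i in range(max(1, num - context), min(total_lines, num + context) + 1):
            included.add(i)
    return sorted(included)

# Hits on lines 2 and 5 of a 6-line body with 1 line of context:
# windows [1..3] and [4..6] merge into one contiguous run.
print(expand_line_context([2, 5], 6, 1))
```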
def format_line_entries(lines: List[str], indices: List[int], ellipsis_gaps: bool = True) -> str:
if not indices:
return ""
chunks = []
prev = None
for num in indices:
if ellipsis_gaps and prev is not None and num > prev + 1:
chunks.append(" ...")
chunks.append(f" L{num}: {lines[num - 1]}")
prev = num
return "\n".join(chunks)
def filter_body_by_lines(
lines: List[str],
compiled: "re.Pattern",
invert: bool,
context_lines: int,
max_lines: int,
) -> Tuple[str, Dict[str, object]]:
matched_nums = []
for idx, line in enumerate(lines, start=1):
hit = compiled.search(line) is not None
if invert:
hit = not hit
if hit:
matched_nums.append(idx)
total = len(lines)
meta = {
"mode": "line",
"total_lines": total,
"matched_lines": len(matched_nums),
"invert": invert,
"truncated": False,
"byte_truncated": False,
}
if not matched_nums:
return "", meta
display_nums = expand_line_context(matched_nums, total, context_lines)
entries = [(n, lines[n - 1]) for n in display_nums]
entries, line_capped = cap_line_entries(entries, max_lines)
meta["truncated"] = line_capped
meta["display_lines"] = len(entries)
return format_line_entries(lines, [n for n, _ in entries], ellipsis_gaps=context_lines > 0), meta
def filter_body_multiline(
text: str,
compiled: "re.Pattern",
invert: bool,
max_lines: int,
dotall: bool,
) -> Tuple[str, Dict[str, object]]:
flags = compiled.flags
if dotall:
pattern = re.compile(compiled.pattern, flags | re.DOTALL | re.MULTILINE)
else:
pattern = re.compile(compiled.pattern, flags | re.MULTILINE)
matches = list(pattern.finditer(text))
if invert:
if matches:
return "", {"mode": "multiline" if not dotall else "full", "total_lines": text.count("\n") + (1 if text else 0), "matched_lines": 0, "invert": True, "truncated": False, "byte_truncated": False}
output = text
meta = {"mode": "multiline" if not dotall else "full", "matched_lines": 1, "invert": True, "truncated": False, "byte_truncated": False}
lines = text.splitlines()
if max_lines > 0 and len(lines) > max_lines:
output = "\n".join(lines[:max_lines])
meta["truncated"] = True
meta["total_lines"] = len(lines)
meta["display_lines"] = min(len(lines), max_lines) if max_lines > 0 else len(lines)
return output, meta
chunks = []
for match in matches:
snippet = match.group(0)
if "\n" in snippet:
snippet = snippet.replace("\n", "\\n")
start_line = text.count("\n", 0, match.start()) + 1
chunks.append((start_line, f" @{start_line}: {snippet}"))
entries, line_capped = cap_line_entries(chunks, max_lines if max_lines > 0 else len(chunks))
meta = {
"mode": "multiline" if not dotall else "full",
"total_lines": text.count("\n") + (1 if text else 0),
"matched_lines": len(matches),
"invert": False,
"truncated": line_capped,
"byte_truncated": False,
"display_lines": len(entries),
}
return "\n".join(line for _, line in entries), meta
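The only difference between the `multiline` and `full` modes is which flags get OR-ed in: both add `re.MULTILINE`, and `full` additionally adds `re.DOTALL` so `.` crosses newlines. A small demonstration with an illustrative pattern and body:

```python
import re

text = "token=abc\nsecret:\n  value=42\n"

# multiline mode: MULTILINE only, '.' stops at line ends.
multiline = re.compile(r"secret:.*", re.MULTILINE)
# full mode: DOTALL added, '.' crosses newlines and grabs the whole block.
full = re.compile(r"secret:.*", re.MULTILINE | re.DOTALL)

m1 = multiline.search(text).group(0)  # just the header line
m2 = full.search(text).group(0)       # header plus everything after it
print(repr(m1))
print(repr(m2))
```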
def apply_body_limits_plain(text: str, max_lines: int) -> Tuple[str, Dict[str, object]]:
lines = text.splitlines()
meta = {
"mode": "plain",
"total_lines": len(lines),
"matched_lines": len(lines),
"invert": False,
"truncated": False,
"byte_truncated": False,
"display_lines": len(lines),
}
output = text
if max_lines > 0 and len(lines) > max_lines:
output = "\n".join(lines[:max_lines])
meta["truncated"] = True
meta["display_lines"] = max_lines
return output, meta
def format_response_body_output(
decoded_body: str,
filter_pattern: str,
filter_mode: str,
filter_invert: bool,
filter_ignore_case: bool,
max_lines: int,
max_bytes: int,
preview_lines: int,
context_lines: int,
compiled_filter=None,
) -> Tuple[str, Dict[str, object]]:
text = decoded_body.rstrip("\r\n")
if not text:
return "", {"mode": "empty", "total_lines": 0, "matched_lines": 0, "invert": filter_invert, "truncated": False, "byte_truncated": False, "display_lines": 0}
lines = text.splitlines()
mode = (filter_mode or "line").strip().lower()
if mode not in {"line", "multiline", "full"}:
mode = "line"
if filter_pattern:
compiled = compiled_filter or compile_response_filter(filter_pattern, filter_ignore_case)
if mode == "line":
output, meta = filter_body_by_lines(lines, compiled, filter_invert, context_lines, max_lines)
else:
output, meta = filter_body_multiline(text, compiled, filter_invert, max_lines, dotall=(mode == "full"))
meta["filter_pattern"] = filter_pattern
if not output and not filter_invert:
preview = min(max(preview_lines, 0), len(lines))
if preview > 0:
preview_text = format_line_entries(lines, list(range(1, preview + 1)), ellipsis_gaps=False)
preview_text, byte_truncated = truncate_utf8(preview_text, max_bytes)
return preview_text, {
**meta,
"preview": True,
"matched_lines": 0,
"display_lines": preview,
"byte_truncated": byte_truncated,
}
return "", {**meta, "preview": False, "matched_lines": 0, "display_lines": 0}
else:
output, meta = apply_body_limits_plain(text, max_lines)
output, byte_truncated = truncate_utf8(output, max_bytes)
if byte_truncated:
meta["byte_truncated"] = True
return output, meta
def print_response_body_summary(meta: Dict[str, object]):
mode = meta.get("mode")
if mode == "empty":
return
parts = [f"mode={mode}"]
if meta.get("filter_pattern"):
parts.append(f"pattern={meta['filter_pattern']!r}")
if meta.get("invert"):
parts.append("invert=true")
total = meta.get("total_lines")
matched = meta.get("matched_lines")
displayed = meta.get("display_lines")
if total is not None and matched is not None:
parts.append(f"matched {matched}/{total} lines")
if displayed is not None:
parts.append(f"showing {displayed}")
if meta.get("preview"):
parts.append("preview on zero match")
if meta.get("truncated"):
parts.append("line cap applied")
if meta.get("byte_truncated"):
parts.append("byte cap applied")
print(f"[body] {' | '.join(parts)}")
def main():
parser = argparse.ArgumentParser(description="Pure Python HTTP testing helper powered by httpx")
parser.add_argument("--url", required=True)
@@ -466,6 +690,16 @@ args:
parser.add_argument("--debug", dest="debug", action="store_true")
parser.add_argument("--response-encoding", dest="response_encoding", default="")
parser.add_argument("--download", dest="download", default="")
parser.add_argument("--response-filter", dest="response_filter", default="")
parser.add_argument("--response-filter-mode", dest="response_filter_mode", default="line")
parser.add_argument("--response-filter-invert", dest="response_filter_invert", action="store_true")
parser.add_argument("--no-response-filter-invert", dest="response_filter_invert", action="store_false")
parser.add_argument("--response-filter-ignore-case", dest="response_filter_ignore_case", action="store_true")
parser.add_argument("--no-response-filter-ignore-case", dest="response_filter_ignore_case", action="store_false")
parser.add_argument("--response-max-lines", dest="response_max_lines", type=int, default=0)
parser.add_argument("--response-max-bytes", dest="response_max_bytes", type=int, default=0)
parser.add_argument("--response-preview-lines", dest="response_preview_lines", type=int, default=5)
parser.add_argument("--response-context-lines", dest="response_context_lines", type=int, default=0)
parser.set_defaults(
include_headers=False,
auto_encode_url=False,
@@ -475,9 +709,22 @@ args:
show_command=False,
show_summary=False,
debug=False,
response_filter_invert=False,
response_filter_ignore_case=False,
)
args = parser.parse_args()
response_filter = (args.response_filter or "").strip()
response_max_lines = max(0, args.response_max_lines or 0)
response_max_bytes = max(0, args.response_max_bytes or 0)
response_preview_lines = max(0, args.response_preview_lines if args.response_preview_lines is not None else 5)
response_context_lines = max(0, args.response_context_lines or 0)
compiled_response_filter = None
if response_filter:
compiled_response_filter = compile_response_filter(
response_filter, args.response_filter_ignore_case
)
repeat = max(1, args.repeat)
try:
delay_between = float(args.delay or "0")
@@ -648,9 +895,37 @@ args:
for key, value in response.headers.items():
print(f"{key}: {value}")
print("")
output_body = decoded_body.rstrip()
output_body, body_output_meta = format_response_body_output(
decoded_body,
response_filter,
args.response_filter_mode,
args.response_filter_invert,
args.response_filter_ignore_case,
response_max_lines,
response_max_bytes,
response_preview_lines,
response_context_lines,
compiled_filter=compiled_response_filter,
)
has_filter_or_cap = bool(
response_filter or response_max_lines > 0 or response_max_bytes > 0
)
if has_filter_or_cap and body_output_meta.get("mode") != "empty":
print_response_body_summary(body_output_meta)
if body_output_meta.get("preview") and not body_output_meta.get("matched_lines"):
print("[body] no regex match; showing preview:")
if output_body:
print(output_body)
if body_output_meta.get("truncated") or body_output_meta.get("byte_truncated"):
omitted = (body_output_meta.get("total_lines") or 0) - (
body_output_meta.get("display_lines") or 0
)
if omitted > 0:
print(f"[body] ... {omitted} more line(s) omitted (use --download for full body)")
elif body_output_meta.get("mode") == "empty":
print("[no body]")
elif response_filter and not body_output_meta.get("preview"):
print("[body] no lines matched filter")
else:
print("[no body]")
@@ -729,6 +1004,13 @@ description: |
- Connection probing: in no-proxy scenarios, additionally probe DNS/TCP/TLS, coarsely replicating curl -w metrics
- Repeatable observation: repeat/delay plus TTFB/total/speed_download stats, useful for blind-injection/timing tests
- Extension switches: additional_args parses httpx options such as http2/cert/verify/trust_env/max_redirects
- Response-body slimming: response_filter extracts lines/blocks by regex; combined with max_lines/max_bytes it caps stdout and cuts Agent token consumption
**Response-filter best practices:**
- Large pages/HTML: use `response_filter` to grab lines containing keywords such as error|exception|password|token|uid
- Without a filter: set `response_max_lines=80` or `response_max_bytes=8192` to keep whole pages out of the context
- Zero hits: the first `response_preview_lines` lines are previewed automatically, avoiding a false "empty response" read
- Full retention: persist large bodies with `download`; stdout keeps only the summary lines
parameters:
- name: "url"
type: "string"
@@ -836,6 +1118,56 @@ parameters:
description: "Force the encoding used to decode the response (e.g. GBK), overriding auto-detection"
required: false
flag: "--response-encoding"
- name: "response_filter"
type: "string"
description: |
Regex filter for the response body (affects stdout only, not --download or metrics).
The default line mode matches per line; example: '(error|exception|SQL|password|token|uid)'.
Combined with response_max_lines/response_max_bytes it markedly cuts token consumption.
required: false
flag: "--response-filter"
- name: "response_filter_mode"
type: "string"
description: "Filter mode: line (per line, default), multiline (cross-line blocks), full (whole-body DOTALL matching)"
required: false
default: "line"
flag: "--response-filter-mode"
- name: "response_filter_invert"
type: "bool"
description: "Inverted filter: output lines NOT matching the regex (useful for stripping HTML noise)"
required: false
default: false
flag: "--response-filter-invert"
- name: "response_filter_ignore_case"
type: "bool"
description: "Case-insensitive regex matching"
required: false
default: false
flag: "--response-filter-ignore-case"
- name: "response_max_lines"
type: "int"
description: "Maximum lines written to stdout (0 = unlimited); with a filter it caps matched lines, without one it truncates the whole body"
required: false
default: 0
flag: "--response-max-lines"
- name: "response_max_bytes"
type: "int"
description: "UTF-8 byte cap for the response body on stdout (0 = unlimited); the excess is truncated"
required: false
default: 0
flag: "--response-max-bytes"
- name: "response_preview_lines"
type: "int"
description: "Preview the first N lines when the filter has zero hits (default 5, 0 = no preview)"
required: false
default: 5
flag: "--response-preview-lines"
- name: "response_context_lines"
type: "int"
description: "In line mode, keep N lines of context around each hit (similar to grep -C)"
required: false
default: 0
flag: "--response-context-lines"
- name: "action"
type: "string"
description: "Reserved field: marks call intent (request, spider, etc.); unused inside the script"
+8 -23
@@ -8,11 +8,8 @@ set -euo pipefail
# - data/
# - venv/ (disabled with --no-venv)
# - tools/ (user extensions; never overwritten by upgrade)
#
# Optional preserves (may overwrite upstream updates):
# - roles/
# - skills/
# Enable with --preserve-custom
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$ROOT_DIR"
@@ -28,7 +25,6 @@ BACKUP_BASE_DIR="$ROOT_DIR/.upgrade-backup"
GITHUB_REPO="Ed1s0nZ/CyberStrikeAI"
TAG=""
PRESERVE_CUSTOM=0
PRESERVE_VENV=1
STOP_SERVICE=1
FORCE_STOP=0
@@ -37,14 +33,12 @@ YES=0
usage() {
cat <<EOF
Usage:
./upgrade.sh [--tag vX.Y.Z] [--preserve-custom] [--no-venv] [--no-stop]
./upgrade.sh [--tag vX.Y.Z] [--no-venv] [--no-stop]
[--force-stop] [--yes]
Options:
--tag <tag> Specify GitHub Release tag (e.g. v1.3.28).
If omitted, the script uses the latest release.
--preserve-custom Preserve roles/skills (may overwrite upstream files).
tools/ is always preserved. Use with caution.
--no-venv Do not preserve venv/ (Python deps will be re-installed).
--no-stop Do not try to stop the running service.
--force-stop If no process matching current directory is found, also stop
@@ -52,7 +46,7 @@ Options:
--yes Do not ask for confirmation.
Description:
The script backs up config.yaml/data/tools/ (and optionally venv/roles/skills) to
The script backs up config.yaml/data/tools/roles/skills/ (and optionally venv/) to
.upgrade-backup/
EOF
}
@@ -177,11 +171,7 @@ confirm_or_exit() {
info " - Preserve venv/: no (will remove old venv and re-install deps)"
fi
info " - Preserve tools/: yes (always)"
if [[ "$PRESERVE_CUSTOM" -eq 1 ]]; then
info " - Preserve roles/skills: yes (may overwrite upstream updates)"
else
info " - Preserve roles/skills: no (will use upstream versions)"
fi
info " - Preserve roles/skills: yes (always)"
info " - Stop service: ${STOP_SERVICE}"
echo ""
read -r -p "Continue? (y/N) " ans
@@ -299,11 +289,8 @@ sync_code() {
# User tool extensions: never replace or delete during upgrade.
rsync_excludes+=( "--exclude=tools/" )
if [[ "$PRESERVE_CUSTOM" -eq 1 ]]; then
rsync_excludes+=( "--exclude=roles/" )
rsync_excludes+=( "--exclude=skills/" )
fi
rsync_excludes+=( "--exclude=roles/" )
rsync_excludes+=( "--exclude=skills/" )
# Ensure this upgrade script itself is not deleted.
rsync_excludes+=( "--exclude=upgrade.sh" )
@@ -324,10 +311,6 @@ main() {
TAG="${2:-}"
shift 2
;;
--preserve-custom)
PRESERVE_CUSTOM=1
shift 1
;;
--no-venv)
PRESERVE_VENV=0
shift 1
@@ -384,8 +367,10 @@ main() {
if [[ -d "$ROOT_DIR/tools" ]]; then
backup_dir_tgz "tools" "$ROOT_DIR/tools"
fi
if [[ "$PRESERVE_CUSTOM" -eq 1 ]]; then
if [[ -d "$ROOT_DIR/roles" ]]; then
backup_dir_tgz "roles" "$ROOT_DIR/roles"
fi
if [[ -d "$ROOT_DIR/skills" ]]; then
backup_dir_tgz "skills" "$ROOT_DIR/skills"
fi
+6
@@ -260,8 +260,14 @@
gap: 12px;
justify-content: center;
flex-wrap: wrap;
max-width: 420px;
margin-inline: auto;
}
.c2-actions > button {
flex: 1;
min-width: min(100%, 160px);
}
/* ============================================================================
Listener Cards
============================================================================ */
+580 -66
@@ -536,6 +536,10 @@ body {
display: none;
}
.conversation-sidebar.collapsed .conversation-reasoning-card {
display: none;
}
.conversation-sidebar.collapsed .conversation-sidebar-header {
flex-direction: column;
align-items: center;
@@ -1136,13 +1140,13 @@ header {
.hitl-sidebar-card {
border-top: 1px solid var(--border-color);
background: linear-gradient(165deg, #f8fafc 0%, #f1f5f9 55%, #eef2f7 100%);
padding: 14px 12px 16px;
padding: 11px 12px;
flex-shrink: 0;
}
.hitl-sidebar-card-header {
display: flex;
align-items: flex-start;
align-items: center;
justify-content: space-between;
gap: 10px;
cursor: pointer;
@@ -1160,7 +1164,7 @@ header {
overflow: hidden;
max-height: 500px;
opacity: 1;
margin-top: 10px;
margin-top: 8px;
transition: max-height 0.3s ease, opacity 0.2s ease, margin-top 0.3s ease;
}
@@ -1176,19 +1180,19 @@ header {
.hitl-sidebar-heading {
display: flex;
align-items: flex-start;
gap: 10px;
align-items: center;
gap: 8px;
min-width: 0;
}
.hitl-sidebar-icon {
flex-shrink: 0;
width: 36px;
height: 36px;
width: 32px;
height: 32px;
display: flex;
align-items: center;
justify-content: center;
border-radius: 10px;
border-radius: 9px;
background: linear-gradient(145deg, rgba(0, 102, 255, 0.12), rgba(0, 102, 255, 0.06));
color: var(--accent-color);
border: 1px solid rgba(0, 102, 255, 0.18);
@@ -1197,27 +1201,27 @@ header {
.hitl-sidebar-heading-text {
display: flex;
flex-direction: column;
gap: 2px;
gap: 1px;
min-width: 0;
}
.hitl-sidebar-title {
font-size: 15px;
font-size: 14px;
font-weight: 700;
letter-spacing: -0.02em;
color: var(--text-primary);
line-height: 1.25;
line-height: 1.2;
}
.hitl-sidebar-subtitle {
font-size: 11px;
font-weight: 500;
color: var(--text-secondary);
line-height: 1.3;
line-height: 1.25;
}
.hitl-apply-btn {
padding: 8px 14px;
padding: 6px 12px;
border-radius: 10px;
font-size: 12px;
font-weight: 600;
@@ -2391,7 +2395,168 @@ header {
box-sizing: border-box;
}
.chat-input-container > .chat-input-with-files {
.chat-input-primary-row {
display: flex;
flex-direction: row;
align-items: flex-end;
gap: 8px;
flex: 1;
min-width: 0;
width: 100%;
}
.chat-input-leading {
display: flex;
flex-direction: row;
align-items: flex-end;
gap: 8px;
flex-shrink: 0;
}
/* Eino: model-reasoning controls sit at the bottom of the conversation sidebar, collapsed by default */
.conversation-sidebar .chat-reasoning-wrapper {
width: 100%;
box-sizing: border-box;
flex-shrink: 0;
}
.conversation-reasoning-card {
border-top: 1px solid var(--border-color);
background: linear-gradient(165deg, #f8fafc 0%, #f1f5f9 55%, #eef2f7 100%);
padding: 11px 12px;
flex-shrink: 0;
}
.conversation-reasoning-card-header {
display: flex;
align-items: center;
justify-content: flex-start;
gap: 0;
width: 100%;
padding: 0;
margin: 0;
border: none;
background: transparent;
cursor: pointer;
text-align: left;
font: inherit;
color: inherit;
-webkit-appearance: none;
appearance: none;
border-radius: 0;
}
.conversation-reasoning-card-header:hover .conversation-reasoning-title {
color: var(--accent-color);
}
.conversation-reasoning-heading {
display: flex;
align-items: center;
gap: 8px;
min-width: 0;
flex: 1;
}
.conversation-reasoning-icon {
flex-shrink: 0;
width: 32px;
height: 32px;
display: flex;
align-items: center;
justify-content: center;
border-radius: 9px;
background: linear-gradient(145deg, rgba(0, 102, 255, 0.12), rgba(0, 102, 255, 0.06));
color: var(--accent-color);
border: 1px solid rgba(0, 102, 255, 0.18);
}
.conversation-reasoning-icon svg {
display: block;
}
.conversation-reasoning-heading-text {
flex: 1;
min-width: 0;
display: flex;
flex-direction: column;
gap: 1px;
}
.conversation-reasoning-title {
font-size: 14px;
font-weight: 700;
letter-spacing: -0.02em;
color: var(--text-primary);
line-height: 1.2;
}
.conversation-reasoning-summary {
font-size: 11px;
font-weight: 500;
color: var(--text-secondary);
line-height: 1.25;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.conversation-reasoning-body {
overflow: hidden;
max-height: 280px;
opacity: 1;
margin-top: 8px;
padding-bottom: 0;
transition: max-height 0.3s ease, opacity 0.2s ease, margin-top 0.3s ease;
}
.conversation-reasoning-card.conversation-reasoning-collapsed .conversation-reasoning-body {
max-height: 0;
opacity: 0;
margin-top: 0;
padding-bottom: 0;
pointer-events: none;
}
.conversation-reasoning-body .chat-reasoning-panel-hint {
margin-top: 0;
margin-bottom: 8px;
}
.chat-reasoning-panel-hint {
font-size: 0.75rem;
color: var(--text-muted, #718096);
margin: 0;
line-height: 1.45;
}
.chat-reasoning-fields {
display: flex;
flex-direction: column;
gap: 12px;
}
.chat-reasoning-field-label {
display: block;
font-size: 0.75rem;
font-weight: 600;
color: var(--text-muted, #718096);
margin-bottom: 6px;
}
.chat-reasoning-select {
width: 100%;
box-sizing: border-box;
padding: 0.45rem 0.6rem;
font-size: 0.8125rem;
border: 1px solid var(--border-color, #e2e8f0);
border-radius: 8px;
background: var(--card-bg, #fff);
color: var(--text-color, #2d3748);
}
.chat-input-container .chat-input-with-files,
.chat-input-primary-row .chat-input-with-files {
flex: 1;
display: flex;
flex-direction: column;
@@ -2399,7 +2564,8 @@ header {
gap: 6px;
}
.chat-input-container > .chat-input-field {
.chat-input-container > .chat-input-field,
.chat-input-primary-row .chat-input-field {
flex: 1;
display: flex;
min-width: 0;
@@ -3542,18 +3708,14 @@ header {
.timeline-item-iteration {
border-left-color: var(--accent-color);
background: rgba(0, 102, 255, 0.05);
background: rgba(0, 102, 255, 0.06);
}
/* Eino multi-agent: distinguish main-orchestrator vs sub-agent timelines */
.timeline-eino-role-orchestrator {
border-left-color: #5c6bc0 !important;
background: rgba(92, 107, 192, 0.09) !important;
}
.timeline-eino-role-sub {
border-left-color: #00897b !important;
background: rgba(0, 137, 123, 0.08) !important;
}
/*
 * Eino main/sub-agent timelines keep the timeline-eino-role-* classes (written by applyEinoTimelineRole),
 * but full-card coloring + !important is no longer applied here, since it would override the type colors
 * of tool calls/results/thinking.
 * The main-orchestrator vs sub-agent distinction is handled by timeline-eino-scope-* on iteration rounds.
 */
.timeline-item-iteration.timeline-eino-scope-main {
border-left-color: #3949ab !important;
background: rgba(57, 73, 171, 0.1) !important;
@@ -3563,24 +3725,72 @@ header {
background: rgba(0, 105, 92, 0.09) !important;
}
/* Model internal thinking: muted gray-purple, so it does not compete visually with "assistant output" */
.timeline-item-thinking {
border-left-color: #9c27b0;
background: rgba(156, 39, 176, 0.05);
border-left-color: #7e57c2;
background: rgba(103, 58, 183, 0.06);
}
/* Main-channel streaming body within an iteration (title is usually "assistant output"): neutral background + accent bar, marking the user-visible reply stream */
.timeline-item-thinking[data-response-stream-placeholder="1"] {
border-left-color: var(--accent-color);
background: rgba(0, 102, 255, 0.04);
}
.timeline-item-reasoning_chain {
border-left-color: #5e35b1;
background: rgba(94, 53, 177, 0.07);
}
.timeline-item-planning {
border-left-color: #00838f;
background: rgba(0, 131, 143, 0.06);
}
/* Tool call: info color (blue), separated from result green/red; the completed state no longer uses green, to avoid confusion with successful results */
.timeline-item-tool_call {
border-left-color: #ff9800;
background: rgba(255, 152, 0, 0.05);
border-left-color: #1565c0;
background: rgba(21, 101, 192, 0.07);
}
.timeline-item-tool_result {
border-left-color: #78909c;
background: rgba(120, 144, 156, 0.06);
}
.timeline-item-tool_result:has(.tool-result-section.success) {
border-left-color: var(--success-color);
background: rgba(40, 167, 69, 0.05);
background: rgba(40, 167, 69, 0.07);
}
.timeline-item-tool_result:has(.tool-result-section.error) {
border-left-color: var(--error-color);
background: rgba(220, 53, 69, 0.07);
}
.timeline-item-tool_result.error {
border-left-color: var(--error-color);
background: rgba(220, 53, 69, 0.05);
background: rgba(220, 53, 69, 0.07);
}
.timeline-item-eino_agent_reply {
border-left-color: #6a1b9a;
background: rgba(106, 27, 154, 0.07);
}
.timeline-item-progress {
border-left-color: #607d8b;
background: rgba(96, 125, 139, 0.08);
}
.timeline-item-warning {
border-left-color: #f57c00;
background: rgba(245, 124, 0, 0.09);
}
.timeline-item-tool_calls_detected {
border-left-color: #0277bd;
background: rgba(2, 119, 189, 0.06);
}
.timeline-item-error {
@@ -3770,20 +3980,36 @@ header {
border: 1px solid rgba(220, 53, 69, 0.3);
}
/* Tool-call item state styles */
/* Tool-call item states: stay in the "info blue" family throughout; the completed state avoids green (so it is not confused with successful tool results) */
.timeline-item-tool_call.tool-call-running {
border-left-color: var(--accent-color);
background: rgba(0, 102, 255, 0.08);
border-left-color: #42a5f5;
background: rgba(66, 165, 245, 0.1);
}
.timeline-item-tool_call.tool-call-completed {
border-left-color: var(--success-color);
background: rgba(40, 167, 69, 0.08);
border-left-color: #0d47a1;
background: rgba(13, 71, 161, 0.08);
}
.timeline-item-tool_call.tool-call-failed {
border-left-color: var(--error-color);
background: rgba(220, 53, 69, 0.08);
background: rgba(220, 53, 69, 0.1);
}
/* Argument blocks loosely match the card's type color, so a scan-read groups them under "call" at a glance */
.timeline-item-tool_call .tool-args {
background: rgba(21, 101, 192, 0.06);
border-color: rgba(21, 101, 192, 0.22);
}
.timeline-item-tool_result:has(.tool-result-section.success) .tool-result {
background: rgba(40, 167, 69, 0.08);
border-color: rgba(40, 167, 69, 0.35);
}
.timeline-item-tool_result:has(.tool-result-section.error) .tool-result {
background: rgba(220, 53, 69, 0.1);
border-color: rgba(220, 53, 69, 0.45);
}
/* Active task bar */
@@ -12294,6 +12520,9 @@ header {
.webshell-ai-process-block .webshell-ai-timeline-thinking {
border-left-color: #9c27b0;
}
.webshell-ai-process-block .webshell-ai-timeline-reasoning_chain {
border-left-color: #5c6bc0;
}
.webshell-ai-process-block .webshell-ai-timeline-tool_call,
.webshell-ai-process-block .webshell-ai-timeline-tool_calls_detected {
border-left-color: #ff9800;
@@ -14520,6 +14749,76 @@ header {
max-width: 480px;
margin: 0 auto;
aspect-ratio: 480 / 260;
isolation: isolate;
}
/* Bottom ambient glow: subtle breathing; hovering a sector tints the whole area with that severity's hue */
.dashboard-severity-chart::before {
content: '';
position: absolute;
inset: -14% -12% -10%;
border-radius: 50%;
pointer-events: none;
z-index: 0;
opacity: 0.92;
background:
radial-gradient(ellipse 82% 64% at 50% 74%, rgba(99, 102, 241, 0.17), transparent 58%),
radial-gradient(ellipse 52% 42% at 14% 94%, rgba(56, 189, 248, 0.11), transparent 52%),
radial-gradient(ellipse 48% 38% at 88% 90%, rgba(244, 114, 182, 0.08), transparent 50%);
animation: dashboard-donut-aura 7s ease-in-out infinite alternate;
}
.dashboard-severity-chart[data-hover-severity="critical"]::before {
opacity: 1;
animation: none;
background: radial-gradient(ellipse 82% 64% at 50% 74%, rgba(239, 68, 68, 0.38), transparent 58%),
radial-gradient(ellipse 50% 44% at 22% 92%, rgba(249, 115, 22, 0.18), transparent 54%);
}
.dashboard-severity-chart[data-hover-severity="high"]::before {
opacity: 1;
animation: none;
background: radial-gradient(ellipse 82% 64% at 50% 74%, rgba(249, 115, 22, 0.36), transparent 58%),
radial-gradient(ellipse 48% 40% at 78% 88%, rgba(234, 179, 8, 0.14), transparent 52%);
}
.dashboard-severity-chart[data-hover-severity="medium"]::before {
opacity: 1;
animation: none;
background: radial-gradient(ellipse 82% 64% at 50% 74%, rgba(234, 179, 8, 0.34), transparent 58%),
radial-gradient(ellipse 46% 38% at 18% 88%, rgba(250, 204, 21, 0.16), transparent 52%);
}
.dashboard-severity-chart[data-hover-severity="low"]::before {
opacity: 1;
animation: none;
background: radial-gradient(ellipse 82% 64% at 50% 74%, rgba(45, 212, 191, 0.34), transparent 58%),
radial-gradient(ellipse 46% 38% at 86% 88%, rgba(14, 165, 233, 0.14), transparent 52%);
}
.dashboard-severity-chart[data-hover-severity="info"]::before {
opacity: 1;
animation: none;
background: radial-gradient(ellipse 82% 64% at 50% 74%, rgba(59, 130, 246, 0.34), transparent 58%),
radial-gradient(ellipse 46% 38% at 30% 86%, rgba(129, 140, 248, 0.16), transparent 52%);
}
@keyframes dashboard-donut-aura {
0% {
opacity: 0.78;
transform: scale(0.97);
filter: saturate(0.92);
}
100% {
opacity: 1;
transform: scale(1.03);
filter: saturate(1.08);
}
}
.dashboard-severity-chart > .dashboard-severity-donut {
position: relative;
z-index: 1;
}
.dashboard-severity-donut {
@@ -14529,30 +14828,170 @@ header {
overflow: visible;
}
.dashboard-severity-donut .donut-track {
fill: #f1f5f9;
.dashboard-severity-donut .donut-track-shadow {
fill: #c9d4e3;
opacity: 0.85;
}
.dashboard-severity-donut .donut-track-vignette {
pointer-events: none;
}
.dashboard-severity-donut .donut-segment-gloss {
mix-blend-mode: soft-light;
opacity: 0.48;
transition: opacity 0.26s ease;
pointer-events: none;
}
.dashboard-severity-donut .donut-segment-gloss.is-active {
opacity: 0.72;
}
.dashboard-severity-donut .donut-segment {
/* White strokes between segments create the cut-line effect, consistent with reference image 2;
   the ring returns to the golden-ratio thickness of 50 and the stroke to 4 for a stronger cut-line feel */
filter: url(#donut-segment-soften);
stroke: #ffffff;
stroke-width: 4;
stroke-linejoin: round;
transition: opacity 0.2s ease;
cursor: default;
pointer-events: none;
transition: opacity 0.22s ease, filter 0.22s ease;
}
.dashboard-severity-donut .donut-segment.is-empty {
display: none;
/* Transparent hit layer: geometry stays fixed; hover only changes the visual layer, avoiding edge flicker caused by scale/stroke changes */
.dashboard-severity-donut .donut-segment-hit {
fill: transparent;
stroke: transparent;
stroke-width: 0;
cursor: pointer;
outline: none;
pointer-events: visible;
}
.dashboard-severity-donut .donut-segment-hit:focus-visible {
outline: 2px solid rgba(0, 102, 255, 0.55);
outline-offset: 2px;
}
.dashboard-severity-donut.donut-ready .donut-segment {
animation: donut-segment-in 0.72s cubic-bezier(0.22, 1.18, 0.36, 1) backwards;
}
.dashboard-severity-donut.donut-ready .donut-segment.seg-critical { animation-delay: 0.03s; }
.dashboard-severity-donut.donut-ready .donut-segment.seg-high { animation-delay: 0.07s; }
.dashboard-severity-donut.donut-ready .donut-segment.seg-medium { animation-delay: 0.11s; }
.dashboard-severity-donut.donut-ready .donut-segment.seg-low { animation-delay: 0.15s; }
.dashboard-severity-donut.donut-ready .donut-segment.seg-info { animation-delay: 0.19s; }
@keyframes donut-segment-in {
from {
opacity: 0;
transform: scale(0.72) translateY(10px);
}
72% {
opacity: 1;
transform: scale(1.06) translateY(0);
}
to {
opacity: 1;
transform: scale(1) translateY(0);
}
}
.dashboard-severity-donut.is-highlighting .donut-segment.is-dimmed,
.dashboard-severity-donut.is-highlighting .donut-label-text.is-dimmed,
.dashboard-severity-donut.is-highlighting .donut-leader.is-dimmed,
.dashboard-severity-donut.is-highlighting .donut-segment-gloss.is-dimmed {
opacity: 0.26;
}
.dashboard-severity-donut .donut-segment.is-active {
/* No scale / stroke-width changes, to keep the hit area from jittering */
z-index: 1;
}
.dashboard-severity-donut[data-hover-severity="critical"] .donut-segment.is-active {
filter: url(#donut-segment-soften) drop-shadow(0 0 28px rgba(239, 68, 68, 0.55)) drop-shadow(0 10px 26px rgba(239, 68, 68, 0.28));
}
.dashboard-severity-donut[data-hover-severity="high"] .donut-segment.is-active {
filter: url(#donut-segment-soften) drop-shadow(0 0 26px rgba(249, 115, 22, 0.52)) drop-shadow(0 10px 24px rgba(249, 115, 22, 0.26));
}
.dashboard-severity-donut[data-hover-severity="medium"] .donut-segment.is-active {
filter: url(#donut-segment-soften) drop-shadow(0 0 26px rgba(234, 179, 8, 0.48)) drop-shadow(0 10px 22px rgba(202, 138, 4, 0.22));
}
.dashboard-severity-donut[data-hover-severity="low"] .donut-segment.is-active {
filter: url(#donut-segment-soften) drop-shadow(0 0 26px rgba(45, 212, 191, 0.48)) drop-shadow(0 10px 22px rgba(13, 148, 136, 0.22));
}
.dashboard-severity-donut[data-hover-severity="info"] .donut-segment.is-active {
filter: url(#donut-segment-soften) drop-shadow(0 0 26px rgba(59, 130, 246, 0.48)) drop-shadow(0 10px 22px rgba(37, 99, 235, 0.22));
}
.dashboard-severity-donut .donut-leader {
stroke: rgba(148, 163, 184, 0.45);
stroke-width: 1.25;
pointer-events: none;
stroke-linecap: round;
transition: opacity 0.22s ease, stroke 0.22s ease;
}
.dashboard-severity-donut.donut-ready .donut-leader {
stroke-dasharray: 100;
stroke-dashoffset: 100;
animation: donut-leader-draw 0.75s cubic-bezier(0.22, 1, 0.36, 1) forwards;
}
.dashboard-severity-donut.donut-ready .donut-leader.label-critical { animation-delay: 0.12s; }
.dashboard-severity-donut.donut-ready .donut-leader.label-high { animation-delay: 0.18s; }
.dashboard-severity-donut.donut-ready .donut-leader.label-medium { animation-delay: 0.24s; }
.dashboard-severity-donut.donut-ready .donut-leader.label-low { animation-delay: 0.30s; }
.dashboard-severity-donut.donut-ready .donut-leader.label-info { animation-delay: 0.36s; }
@keyframes donut-leader-draw {
to { stroke-dashoffset: 0; }
}
.dashboard-severity-donut .donut-leader.is-active {
stroke: rgba(71, 85, 105, 0.95);
stroke-width: 2;
}
.dashboard-severity-donut .donut-label-text {
pointer-events: none;
transition: opacity 0.22s ease, transform 0.28s cubic-bezier(0.34, 1.35, 0.48, 1);
font-size: 14px;
font-weight: 700;
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "PingFang SC", "Hiragino Sans GB", "Microsoft YaHei", sans-serif;
}
.dashboard-severity-donut.donut-ready .donut-label-text {
animation: donut-label-pop 0.58s cubic-bezier(0.34, 1.25, 0.48, 1) backwards;
}
.dashboard-severity-donut.donut-ready .donut-label-text.label-critical { animation-delay: 0.2s; }
.dashboard-severity-donut.donut-ready .donut-label-text.label-high { animation-delay: 0.26s; }
.dashboard-severity-donut.donut-ready .donut-label-text.label-medium { animation-delay: 0.32s; }
.dashboard-severity-donut.donut-ready .donut-label-text.label-low { animation-delay: 0.38s; }
.dashboard-severity-donut.donut-ready .donut-label-text.label-info { animation-delay: 0.44s; }
@keyframes donut-label-pop {
from {
opacity: 0;
transform: translateY(8px);
}
to {
opacity: 1;
transform: translateY(0);
}
}
.dashboard-severity-donut .donut-label-text.is-active {
font-weight: 800;
transform: translateY(-2px);
}
.dashboard-severity-donut .donut-label-text .donut-label-pct {
font-weight: 500;
font-size: 11px;
@@ -14570,45 +15009,86 @@ header {
.dashboard-severity-donut .donut-label-text.label-low { fill: #14b8a6; }
.dashboard-severity-donut .donut-label-text.label-info { fill: #3b82f6; }
/* Half-ring palette: keep the original light base tones (red → orange → yellow → teal → blue) */
.dashboard-severity-donut .donut-segment.seg-critical { fill: #f87171; }
.dashboard-severity-donut .donut-segment.seg-high { fill: #fb923c; }
.dashboard-severity-donut .donut-segment.seg-medium { fill: #facc15; }
.dashboard-severity-donut .donut-segment.seg-low { fill: #2dd4bf; }
.dashboard-severity-donut .donut-segment.seg-info { fill: #60a5fa; }
/* The half-ring body colors are supplied by the SVG linearGradients (#donut-grad-*) */
.dashboard-severity-donut .donut-segment.is-empty {
display: none;
}
@media (prefers-reduced-motion: reduce) {
.dashboard-severity-chart::before {
animation: none;
}
.dashboard-severity-donut.donut-ready .donut-segment,
.dashboard-severity-donut.donut-ready .donut-leader,
.dashboard-severity-donut.donut-ready .donut-label-text {
animation: none !important;
}
.dashboard-severity-center.is-hovering {
transform: translateX(-50%);
}
}
/* Center figure: plain text, placed just below the half-circle opening (near the diameter line), so it does not cover the colored arc */
.dashboard-severity-center {
position: absolute;
left: 50%;
/* In the viewBox (0,0,480,260) cy is 215 (about 83%);
   place the center text low inside the inner ring, near the diameter line, so the number "sits" in the half circle */
bottom: 6%;
transform: translateX(-50%);
text-align: center;
pointer-events: none;
width: auto;
max-width: 7rem;
padding: 0;
margin: 0;
background: none;
border: none;
box-shadow: none;
backdrop-filter: none;
-webkit-backdrop-filter: none;
transition: transform 0.28s cubic-bezier(0.34, 1.35, 0.48, 1);
z-index: 2;
}
.dashboard-severity-center.is-hovering {
transform: translateX(-50%) scale(1.06);
}
.dashboard-severity-center-label.is-severity {
font-weight: 700;
letter-spacing: 0.02em;
}
.dashboard-severity-center-value {
font-size: 2.5rem;
font-weight: 800;
line-height: 1;
color: var(--text-primary);
letter-spacing: -0.04em;
font-variant-numeric: tabular-nums;
text-shadow:
0 0 20px rgba(255, 255, 255, 0.95),
0 1px 2px rgba(255, 255, 255, 0.8);
}
.dashboard-severity-center-label {
font-size: 0.75rem;
color: var(--text-secondary);
margin-top: 4px;
letter-spacing: 0.06em;
font-weight: 500;
text-shadow: 0 0 12px rgba(255, 255, 255, 0.9);
}
.dashboard-severity-center-label[data-severity="critical"] { color: #dc2626; }
.dashboard-severity-center-label[data-severity="high"] { color: #ea580c; }
.dashboard-severity-center-label[data-severity="medium"] { color: #b45309; }
.dashboard-severity-center-label[data-severity="low"] { color: #0f766e; }
.dashboard-severity-center-label[data-severity="info"] { color: #2563eb; }
@media (max-width: 720px) {
.dashboard-severity-center-value { font-size: 2.1rem; }
.dashboard-severity-center-label { font-size: 0.6875rem; }
}
.dashboard-severity-legend {
@@ -14627,12 +15107,46 @@ header {
padding: 10px 4px;
font-size: 0.9375rem;
border-bottom: 1px solid transparent;
transition: background 0.2s, border-color 0.2s, box-shadow 0.2s, opacity 0.2s;
border-radius: 4px;
cursor: pointer;
}
.dashboard-severity-legend-item:hover,
.dashboard-severity-legend-item.is-active {
background: rgba(0, 102, 255, 0.06);
border-radius: 8px;
}
.dashboard-severity-legend-item.is-active {
box-shadow: inset 3px 0 0 var(--accent-color, #0066ff);
}
.dashboard-severity-legend-item.is-zero {
opacity: 0.55;
}
.dashboard-severity-legend-item:focus-visible {
outline: 2px solid rgba(0, 102, 255, 0.45);
outline-offset: 2px;
}
.dashboard-severity-donut-tooltip {
display: none;
position: fixed;
left: 0;
top: 0;
z-index: 10000;
max-width: 280px;
padding: 8px 12px;
font-size: 0.8125rem;
line-height: 1.45;
color: #fff;
background: rgba(15, 23, 42, 0.94);
border-radius: 8px;
box-shadow: 0 6px 20px rgba(0, 0, 0, 0.22);
pointer-events: none;
white-space: nowrap;
}
.dashboard-severity-legend-dot {
@@ -77,6 +77,7 @@
"settings": "System settings",
"hitl": "Human-in-the-loop",
"c2": "C2",
"c2Manage": "C2 management",
"c2Listeners": "Listeners",
"c2Sessions": "Sessions",
"c2Tasks": "Tasks",
@@ -146,6 +147,7 @@
"active": "Active",
"highFreq": "High frequency",
"noCallData": "No call data",
"severityClickHint": "Click to view",
"lastUpdated": "Last updated",
"viewAll": "View all →",
"recentVulns": "Recent vulnerabilities",
@@ -277,6 +279,7 @@
"planExecuteStreamPhase": "Phase output",
"einoSubAgentStep": "Sub-agent {{agent}} · step {{n}}",
"aiThinking": "AI thinking",
"reasoningChain": "Reasoning process",
"planning": "Planning",
"assistantStreamPhase": "Assistant output",
"toolCallsDetected": "Detected {{count}} tool call(s)",
@@ -329,6 +332,19 @@
"agentModeMulti": "Multi-agent",
"agentModeSingleHint": "Single-model ReAct loop for chat and tool use",
"agentModeMultiHint": "Eino prebuilt orchestration (deep / plan_execute / supervisor) for complex tasks",
"reasoningModeLabel": "Model reasoning",
"reasoningEffortLabel": "Reasoning effort",
"reasoningModeDefault": "Use system default",
"reasoningModeOff": "Off",
"reasoningModeOn": "On",
"reasoningModeAuto": "Auto",
"reasoningEffortUnset": "Unspecified",
"reasoningCompactLabel": "Reasoning",
"reasoningCompactAria": "Open model reasoning options",
"reasoningPanelTitle": "Model reasoning",
"reasoningPanelHint": "Only Eino single- and multi-agent requests use these; merged with defaults in Settings.",
"reasoningSummaryFollow": "System",
"reasoningSummaryDash": "—",
"agentModeOrchPlanExecute": "Plan-Exec",
"agentModeOrchSupervisor": "Supervisor",
"hitlTitle": "Human-in-the-loop",
@@ -1592,6 +1608,10 @@
"maxTotalTokens": "Max Context Tokens",
"maxTotalTokensPlaceholder": "120000",
"maxTotalTokensHint": "Shared by memory compression and attack chain building. Default: 120000",
"openaiReasoningTitle": "Model reasoning (Eino)",
"openaiReasoningHint": "Applies to Eino single-agent and multi-agent only; works with chat-page reasoning controls.",
"openaiReasoningProfile": "Wire profile",
"openaiReasoningAllowClient": "Allow chat page to override reasoning options",
"fofaBaseUrlPlaceholder": "https://fofa.info/api/v1/search/all (optional)",
"fofaBaseUrlHint": "Leave empty for default.",
"email": "Email",
@@ -77,6 +77,7 @@
"settings": "系统设置",
"hitl": "人机协同",
"c2": "C2",
"c2Manage": "C2 管理",
"c2Listeners": "监听器",
"c2Sessions": "会话",
"c2Tasks": "任务",
@@ -146,6 +147,7 @@
"active": "活跃",
"highFreq": "高频",
"noCallData": "暂无调用数据",
"severityClickHint": "点击查看",
"lastUpdated": "上次更新",
"viewAll": "查看全部 →",
"recentVulns": "最近漏洞",
@@ -266,6 +268,7 @@
"planExecuteStreamPhase": "阶段输出",
"einoSubAgentStep": "子代理 {{agent}} · 第 {{n}} 步",
"aiThinking": "AI思考",
"reasoningChain": "推理过程",
"planning": "规划中",
"assistantStreamPhase": "助手输出",
"toolCallsDetected": "检测到 {{count}} 个工具调用",
@@ -318,6 +321,19 @@
"agentModeMulti": "多代理",
"agentModeSingleHint": "单模型 ReAct 循环,适合常规对话与工具调用",
"agentModeMultiHint": "Eino 预置编排(deep / plan_execute / supervisor),适合复杂任务",
"reasoningModeLabel": "模型推理",
"reasoningEffortLabel": "推理强度",
"reasoningModeDefault": "跟随系统",
"reasoningModeOff": "关闭",
"reasoningModeOn": "开启",
"reasoningModeAuto": "自动",
"reasoningEffortUnset": "不指定",
"reasoningCompactLabel": "推理",
"reasoningCompactAria": "打开模型推理选项",
"reasoningPanelTitle": "模型推理",
"reasoningPanelHint": "仅 Eino 单代理与多代理请求会带上这些参数;与系统设置中的默认值合并。",
"reasoningSummaryFollow": "系统",
"reasoningSummaryDash": "—",
"agentModeOrchPlanExecute": "Plan-Exec",
"agentModeOrchSupervisor": "Supervisor",
"hitlTitle": "人机协同",
@@ -1581,6 +1597,10 @@
"maxTotalTokens": "最大上下文 Token 数",
"maxTotalTokensPlaceholder": "120000",
"maxTotalTokensHint": "内存压缩和攻击链构建共用此配置,默认 120000",
"openaiReasoningTitle": "模型推理(Eino",
"openaiReasoningHint": "仅 Eino 单代理与多代理请求生效;与对话页「模型推理」下拉配合使用。",
"openaiReasoningProfile": "线路 profile",
"openaiReasoningAllowClient": "允许对话页覆盖推理选项",
"fofaBaseUrlPlaceholder": "https://fofa.info/api/v1/search/all(可选)",
"fofaBaseUrlHint": "留空则使用默认地址。",
"email": "Email",
@@ -342,22 +342,27 @@ function formatMarkdown(text) {
ALLOWED_ATTR: ['href', 'title', 'alt', 'src', 'class'],
ALLOW_DATA_ATTR: false,
};
const raw = text == null ? '' : String(text);
const src = typeof window.normalizeAssistantMarkdownSource === 'function'
? window.normalizeAssistantMarkdownSource(raw)
: raw;
if (typeof DOMPurify !== 'undefined') {
if (typeof marked !== 'undefined' && !/<[a-z][\s\S]*>/i.test(src)) {
try {
marked.setOptions({
breaks: true,
gfm: true,
});
const parsedContent = marked.parse(src, { async: false });
return DOMPurify.sanitize(parsedContent, sanitizeConfig);
} catch (e) {
console.error('Markdown parse failed:', e);
return DOMPurify.sanitize(src, sanitizeConfig);
}
} else {
return DOMPurify.sanitize(src, sanitizeConfig);
}
} else if (typeof marked !== 'undefined') {
try {
@@ -365,13 +370,13 @@ function formatMarkdown(text) {
breaks: true,
gfm: true,
});
return marked.parse(src, { async: false });
} catch (e) {
console.error('Markdown parse failed:', e);
return escapeHtml(src).replace(/\n/g, '<br>');
}
} else {
return escapeHtml(src).replace(/\n/g, '<br>');
}
}
@@ -40,6 +40,8 @@ let chatAttachmentSeq = 0;
// Chat modes: react = native ReAct (agent-loop); eino_single = Eino ADK single agent (/api/eino-agent/stream); deep / plan_execute / supervisor = Eino multi-agent (/api/multi-agent/stream, with orchestration in the request body)
const AGENT_MODE_STORAGE_KEY = 'cyberstrike-chat-agent-mode';
const REASONING_MODE_LS = 'cyberstrike-chat-reasoning-mode';
const REASONING_EFFORT_LS = 'cyberstrike-chat-reasoning-effort';
const CHAT_AGENT_MODE_REACT = 'react';
const CHAT_AGENT_MODE_EINO_SINGLE = 'eino_single';
const CHAT_AGENT_EINO_MODES = ['deep', 'plan_execute', 'supervisor'];
@@ -492,6 +494,131 @@ function syncAgentModeFromValue(value) {
const v = el.getAttribute('data-value');
el.classList.toggle('selected', v === value);
});
syncReasoningRowVisibility(value);
}
function syncReasoningRowVisibility(modeVal) {
const wrap = document.getElementById('chat-reasoning-wrapper');
if (!wrap) return;
const show = modeVal === CHAT_AGENT_MODE_EINO_SINGLE || (multiAgentAPIEnabled && chatAgentModeIsEino(modeVal));
wrap.style.display = show ? '' : 'none';
if (!show) {
closeChatReasoningPanel();
} else {
updateChatReasoningSummary();
}
}
function reasoningSummaryModeLabel(mode) {
const m = (mode || 'default').trim();
const t = (typeof window.t === 'function') ? window.t : function (k) { return k; };
switch (m) {
case 'off': return t('chat.reasoningModeOff');
case 'on': return t('chat.reasoningModeOn');
case 'auto': return t('chat.reasoningModeAuto');
default: return t('chat.reasoningSummaryFollow');
}
}
function updateChatReasoningSummary() {
const el = document.getElementById('chat-reasoning-summary');
const modeEl = document.getElementById('chat-reasoning-mode');
const effEl = document.getElementById('chat-reasoning-effort');
if (!el || !modeEl) return;
const mode = (modeEl.value || 'default').trim();
const effort = effEl && effEl.value ? String(effEl.value).trim() : '';
const t = (typeof window.t === 'function') ? window.t : function (k) { return k; };
const modePart = reasoningSummaryModeLabel(mode);
const effPart = effort || t('chat.reasoningSummaryDash');
el.textContent = modePart + ' / ' + effPart;
}
function closeChatReasoningPanel() {
const wrap = document.getElementById('chat-reasoning-wrapper');
const toggle = document.getElementById('conversation-reasoning-toggle');
if (wrap) wrap.classList.add('conversation-reasoning-collapsed');
if (toggle) toggle.setAttribute('aria-expanded', 'false');
}
function toggleConversationReasoningCard() {
const wrap = document.getElementById('chat-reasoning-wrapper');
const toggle = document.getElementById('conversation-reasoning-toggle');
if (!wrap || !toggle) return;
wrap.classList.toggle('conversation-reasoning-collapsed');
const collapsed = wrap.classList.contains('conversation-reasoning-collapsed');
toggle.setAttribute('aria-expanded', collapsed ? 'false' : 'true');
if (!collapsed) {
if (typeof closeAgentModePanel === 'function') {
closeAgentModePanel();
}
if (typeof closeRoleSelectionPanel === 'function') {
closeRoleSelectionPanel();
}
updateChatReasoningSummary();
}
}
function toggleChatReasoningPanel() {
toggleConversationReasoningCard();
}
function restoreChatReasoningControlsFromStorage() {
try {
const m = document.getElementById('chat-reasoning-mode');
const e = document.getElementById('chat-reasoning-effort');
if (m) {
const v = localStorage.getItem(REASONING_MODE_LS);
if (v && ['default', 'off', 'on', 'auto'].indexOf(v) !== -1) {
m.value = v;
}
}
if (e) {
const v = localStorage.getItem(REASONING_EFFORT_LS);
if (v !== null && ['', 'low', 'medium', 'high', 'max'].indexOf(v) !== -1) {
e.value = v;
}
}
updateChatReasoningSummary();
} catch (err) { /* ignore */ }
}
function persistChatReasoningPrefs() {
try {
const m = document.getElementById('chat-reasoning-mode');
const elEff = document.getElementById('chat-reasoning-effort');
if (m) localStorage.setItem(REASONING_MODE_LS, m.value || 'default');
if (elEff) localStorage.setItem(REASONING_EFFORT_LS, elEff.value || '');
updateChatReasoningSummary();
} catch (err) { /* ignore */ }
}
/** Reused by WebShell and others: on the Eino path, returns the reasoning request fragment, or undefined */
function buildReasoningRequestPayload() {
const wrap = document.getElementById('chat-reasoning-wrapper');
if (!wrap || wrap.style.display === 'none') {
return undefined;
}
const modeEl = document.getElementById('chat-reasoning-mode');
const effEl = document.getElementById('chat-reasoning-effort');
if (!modeEl) return undefined;
const mode = (modeEl.value || 'default').trim();
const effort = effEl && effEl.value ? String(effEl.value).trim() : '';
if (mode === 'default' && !effort) {
return undefined;
}
const o = {};
if (mode !== 'default') o.mode = mode;
if (effort) o.effort = effort;
return Object.keys(o).length ? o : undefined;
}
if (typeof window !== 'undefined') {
window.persistChatReasoningPrefs = persistChatReasoningPrefs;
window.buildReasoningRequestPayload = buildReasoningRequestPayload;
window.closeChatReasoningPanel = closeChatReasoningPanel;
window.toggleChatReasoningPanel = toggleChatReasoningPanel;
window.toggleConversationReasoningCard = toggleConversationReasoningCard;
window.updateChatReasoningSummary = updateChatReasoningSummary;
}
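Stripped of its DOM lookups, the payload logic in `buildReasoningRequestPayload` reduces to a small pure function. The sketch below is illustrative (the element wiring in the real code is omitted, and `reasoningPayloadFrom` is a hypothetical name):

```javascript
// Sketch of buildReasoningRequestPayload()'s core logic, with DOM reads
// replaced by plain arguments. Returns undefined when everything follows
// the system default, so no `reasoning` field is attached to the body.
function reasoningPayloadFrom(mode, effort) {
  const m = (mode || 'default').trim();
  const e = effort ? String(effort).trim() : '';
  if (m === 'default' && !e) return undefined;
  const o = {};
  if (m !== 'default') o.mode = m;   // 'off' | 'on' | 'auto'
  if (e) o.effort = e;               // 'low' | 'medium' | 'high' | 'max'
  return o;
}
```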
function closeAgentModePanel() {
@@ -513,6 +640,9 @@ function toggleAgentModePanel() {
closeAgentModePanel();
return;
}
if (typeof closeChatReasoningPanel === 'function') {
closeChatReasoningPanel();
}
if (typeof closeRoleSelectionPanel === 'function') {
closeRoleSelectionPanel();
}
@@ -563,6 +693,8 @@ async function initChatAgentModeFromConfig() {
} catch (e) { /* ignore */ }
sel.value = stored;
syncAgentModeFromValue(stored);
restoreChatReasoningControlsFromStorage();
syncReasoningRowVisibility(stored);
} catch (e) {
console.warn('initChatAgentModeFromConfig', e);
}
@@ -575,6 +707,9 @@ document.addEventListener('languagechange', function () {
if (v === CHAT_AGENT_MODE_REACT || chatAgentModeIsEinoSingle(v) || chatAgentModeIsEino(v)) {
syncAgentModeFromValue(v);
}
if (typeof updateChatReasoningSummary === 'function') {
updateChatReasoningSummary();
}
});
// Save the input-box draft to localStorage (debounced)
@@ -760,6 +895,10 @@ async function sendMessage() {
serverPath: a.serverPath
}));
}
const reasoningPayload = buildReasoningRequestPayload();
if (reasoningPayload) {
body.reasoning = reasoningPayload;
}
// Clear the attachment list after sending
chatAttachments = [];
renderChatFileChips();
@@ -1705,7 +1844,10 @@ function refreshSystemReadyMessageBubbles() {
if (typeof marked !== 'undefined') {
try {
marked.setOptions({ breaks: true, gfm: true });
const parsed = marked.parse(text);
const src = typeof window.normalizeAssistantMarkdownSource === 'function'
? window.normalizeAssistantMarkdownSource(text)
: text;
const parsed = marked.parse(src, { async: false });
formattedContent = typeof DOMPurify !== 'undefined'
? DOMPurify.sanitize(parsed, defaultSanitizeConfig)
: parsed;
@@ -1796,7 +1938,10 @@ function addMessage(role, content, mcpExecutionIds = null, progressId = null, cr
breaks: true,
gfm: true,
});
return marked.parse(raw);
const src = typeof window.normalizeAssistantMarkdownSource === 'function'
? window.normalizeAssistantMarkdownSource(raw)
: raw;
return marked.parse(src, { async: false });
} catch (e) {
console.error('Markdown parse failed:', e);
return null;
@@ -2079,6 +2224,39 @@ function showCopySuccess(button) {
}
}
/** Keep only one of any adjacent process-detail rows whose type/body/data match exactly (consistent with backend dedup; avoids stacking identical blocks in the timeline) */
function dedupeConsecutiveProcessDetailRows(details) {
if (!Array.isArray(details) || details.length < 2) {
return details;
}
const out = [details[0]];
for (let i = 1; i < details.length; i++) {
const cur = details[i];
if (processDetailRowFingerprint(out[out.length - 1]) === processDetailRowFingerprint(cur)) {
continue;
}
out.push(cur);
}
return out;
}
function processDetailRowFingerprint(d) {
if (!d || typeof d !== 'object') {
return '';
}
const et = String(d.eventType || '');
const msg = String(d.message != null ? d.message : '').trim();
let dataKey = '';
try {
if (d.data != null) {
dataKey = JSON.stringify(d.data);
}
} catch (e) {
dataKey = String(d.data);
}
return et + '\0' + msg + '\0' + dataKey;
}
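The two helpers above are self-contained and can be exercised standalone; a quick sketch of the dedup behavior (function bodies copied from the diff, sample rows are illustrative):

```javascript
function processDetailRowFingerprint(d) {
  if (!d || typeof d !== 'object') return '';
  const et = String(d.eventType || '');
  const msg = String(d.message != null ? d.message : '').trim();
  let dataKey = '';
  try {
    if (d.data != null) dataKey = JSON.stringify(d.data);
  } catch (e) {
    dataKey = String(d.data);
  }
  return et + '\0' + msg + '\0' + dataKey;
}
function dedupeConsecutiveProcessDetailRows(details) {
  if (!Array.isArray(details) || details.length < 2) return details;
  const out = [details[0]];
  for (let i = 1; i < details.length; i++) {
    if (processDetailRowFingerprint(out[out.length - 1]) === processDetailRowFingerprint(details[i])) continue;
    out.push(details[i]);
  }
  return out;
}

const rows = [
  { eventType: 'thinking', message: 'a' },
  { eventType: 'thinking', message: 'a' }, // adjacent duplicate: dropped
  { eventType: 'tool', message: 'a' },
  { eventType: 'thinking', message: 'a' }, // same as row 1 but not adjacent: kept
];
// dedupeConsecutiveProcessDetailRows(rows).length === 3
```

Note that only *adjacent* duplicates are collapsed, which is why the non-adjacent repeat of the first row survives.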
// Render process details
function renderProcessDetails(messageId, processDetails) {
const messageElement = document.getElementById(messageId);
@@ -2178,6 +2356,7 @@ function renderProcessDetails(messageId, processDetails) {
}
detailsContainer.dataset.lazyNotLoaded = '0';
detailsContainer.dataset.loaded = '1';
processDetails = dedupeConsecutiveProcessDetailRows(processDetails);
// If processDetails is missing or empty, render the empty state
if (!processDetails || processDetails.length === 0) {
// Show the empty-state hint
@@ -2228,6 +2407,8 @@ function renderProcessDetails(messageId, processDetails) {
}
} else if (eventType === 'thinking') {
itemTitle = agPx + '🤔 ' + (typeof window.t === 'function' ? window.t('chat.aiThinking') : 'AI思考');
} else if (eventType === 'reasoning_chain') {
itemTitle = agPx + '🔗 ' + (typeof window.t === 'function' ? window.t('chat.reasoningChain') : '推理过程');
} else if (eventType === 'planning') {
if (typeof window.einoMainStreamPlanningTitle === 'function') {
itemTitle = window.einoMainStreamPlanningTitle(data);
@@ -7233,6 +7414,14 @@ document.addEventListener('click', function(event) {
closeAgentModePanel();
}
}
const reasoningWrap = document.getElementById('chat-reasoning-wrapper');
if (reasoningWrap && reasoningWrap.style.display !== 'none' &&
!reasoningWrap.classList.contains('conversation-reasoning-collapsed')) {
if (!reasoningWrap.contains(event.target)) {
closeChatReasoningPanel();
}
}
});
// Create group
@@ -202,7 +202,6 @@ async function refreshDashboard() {
openHighCount = pickOpenCount(openHighRes, highCount);
openMediumCount = pickOpenCount(openMediumRes, mediumCount);
openLowCount = pickOpenCount(openLowRes, lowCount);
if (severityTotalEl) severityTotalEl.textContent = String(total);
severityIds.forEach(sev => {
const count = bySeverity[sev] || 0;
const el = document.getElementById('dashboard-severity-' + sev);
@@ -726,8 +725,8 @@ function renderDashboardAlertBanner(stats) {
try { sessionStorage.setItem(DASH_SESSION_ALERT_LAST_REASONS, reasonPartJoined); } catch (_) {}
}
// External MCP health: parsed from /api/external-mcp/stats (backend fields: total/enabled/disabled/connected).
// Decides whether to show row 6 of the capability overview, and returns the "enabled but not connected" count to drive the alert banner.
function renderExternalMcpHealth(stats) {
var row = document.getElementById('dashboard-resource-external-mcp-row');
var textEl = document.getElementById('dashboard-resource-external-mcp-text');
@@ -738,22 +737,29 @@ function renderExternalMcpHealth(stats) {
row.hidden = true;
return 0;
}
var total = Number(stats.total ?? stats.Total ?? 0) || 0;
var enabled = Number(stats.enabled ?? stats.Enabled ?? 0) || 0;
// The backend reports the connected count as `connected`; keep the legacy `running` field as a fallback
var connected = Number(stats.connected ?? stats.Connected ??
stats.running ?? stats.Running ?? 0) || 0;
if (total === 0) {
row.hidden = true;
return 0;
}
// When no external MCP is enabled, hide the health row and skip alerting (matching the MCP management page)
if (enabled === 0) {
row.hidden = true;
return 0;
}
var down = Math.max(0, enabled - connected);
row.hidden = false;
textEl.textContent = formatNumber(connected) + ' / ' + formatNumber(enabled);
if (healthEl) {
healthEl.classList.remove('is-ok', 'is-warning', 'is-danger');
if (down === 0) {
healthEl.classList.add('is-ok');
healthEl.textContent = dt('dashboard.mcpAllRunning', null, '全部运行');
} else if (down < enabled) {
healthEl.classList.add('is-warning');
healthEl.textContent = dt('dashboard.mcpPartialDown', { count: down },
down + ' 个未运行');
@@ -1383,6 +1389,17 @@ function dashboardBarTooltipOnOut(ev) {
if (dashboardBarTooltipEl) dashboardBarTooltipEl.style.display = 'none';
}
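The health decision in `renderExternalMcpHealth` above can be reduced to a pure helper. This is a sketch: the field names (`total`/`enabled`/`connected`, legacy `running`) follow the diff, but the returned status strings are illustrative, not the real i18n keys:

```javascript
// Pure version of the external-MCP health decision: hide the row when
// nothing is enabled; otherwise classify by "enabled but not connected".
function externalMcpHealth(stats) {
  if (!stats || typeof stats !== 'object') return { show: false, down: 0 };
  const total = Number(stats.total ?? 0) || 0;
  const enabled = Number(stats.enabled ?? 0) || 0;
  const connected = Number(stats.connected ?? stats.running ?? 0) || 0;
  if (total === 0 || enabled === 0) return { show: false, down: 0 }; // no row, no alert
  const down = Math.max(0, enabled - connected);
  let status = 'ok';
  if (down > 0) status = down < enabled ? 'warning' : 'danger';
  return { show: true, down: down, status: status, text: connected + ' / ' + enabled };
}
```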
// Dashboard → vulnerability management: jump with severity/status filters
function navigateToVulnerabilitiesWithFilter(opts) {
opts = opts || {};
var params = new URLSearchParams();
if (opts.severity) params.set('severity', opts.severity);
if (opts.status) params.set('status', opts.status);
var qs = params.toString();
window.location.hash = qs ? 'vulnerabilities?' + qs : 'vulnerabilities';
}
window.navigateToVulnerabilitiesWithFilter = navigateToVulnerabilitiesWithFilter;
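The hash that `navigateToVulnerabilitiesWithFilter` assigns can be isolated as a pure helper for illustration (the `vulnerabilitiesHash` name is hypothetical; the real function writes `window.location.hash` directly):

```javascript
// Builds the 'vulnerabilities?severity=...&status=...' hash fragment;
// omits the query string entirely when no filter is given.
function vulnerabilitiesHash(opts) {
  opts = opts || {};
  var params = new URLSearchParams();
  if (opts.severity) params.set('severity', opts.severity);
  if (opts.status) params.set('status', opts.status);
  var qs = params.toString();
  return qs ? 'vulnerabilities?' + qs : 'vulnerabilities';
}
```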
// Vulnerability severity distribution: half-ring (donut) rendering.
// Geometry parameters are fixed, to pair with the SVG container's viewBox 0 0 560 320.
// Segment separators are drawn with CSS white strokes; gapRad is no longer used for them
@@ -1395,9 +1412,31 @@ var SEVERITY_DONUT_CFG = {
rOuter: 165,
rInner: 115, // ring thickness = 50 (between the original 90 and the previous version's 35: natural, with some depth)
labelOffset: 14,
gapRad: 0.012
};
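`halfRingPath` and `arcSegmentPath`, which consume this geometry, are referenced later in the diff but not shown in any hunk. A hypothetical `halfRingPath` consistent with the config above (flat side on the bottom, arc opening upward) could look like this; it is a sketch, not the project's actual implementation:

```javascript
// Hypothetical half-ring path: outer semicircle left-to-right along the top,
// then back along the inner semicircle, closed along the diameter line.
function halfRingPath(cx, cy, rOuter, rInner) {
  return 'M ' + (cx - rOuter) + ' ' + cy +
         ' A ' + rOuter + ' ' + rOuter + ' 0 0 1 ' + (cx + rOuter) + ' ' + cy +
         ' L ' + (cx + rInner) + ' ' + cy +
         ' A ' + rInner + ' ' + rInner + ' 0 0 0 ' + (cx - rInner) + ' ' + cy +
         ' Z';
}
```

The sweep flags (`0 0 1` outer, `0 0 0` inner) make the two arcs bow upward in SVG's y-down coordinate system.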
// Three-stop gradients: [light highlight, saturated mid tone, dark edge] to create a glazed, 3D-like layering
var SEVERITY_DONUT_GRADIENTS = {
critical: ['#fecaca', '#f87171', '#dc2626'],
high: ['#fed7aa', '#fb923c', '#ea580c'],
medium: ['#fef08a', '#facc15', '#ca8a04'],
low: ['#99f6e4', '#2dd4bf', '#0f766e'],
info: ['#bfdbfe', '#60a5fa', '#2563eb']
};
var severityDonutCenterDisplayed = { total: null, hoverCount: null };
var severityDonutState = {
bySeverity: {},
total: 0,
hoverId: null,
bound: false
};
var severityDonutTooltipEl = null;
var severityDonutTooltipTimer = null;
var severityDonutHoverClearTimer = null;
var SEVERITY_DEFAULT_LABELS = {
critical: '严重',
high: '高危',
@@ -1415,17 +1454,65 @@ function severityLabel(id) {
return SEVERITY_DEFAULT_LABELS[id] || id;
}
function ensureSeverityDonutDefs() {
var defsEl = document.getElementById('dashboard-severity-donut-defs');
if (!defsEl || defsEl.hasChildNodes()) return;
var html = '';
html += '<linearGradient id="donut-track-face" x1="0%" y1="0%" x2="0%" y2="100%">';
html += '<stop offset="0%" stop-color="#f8fafc"/>';
html += '<stop offset="55%" stop-color="#e8eef5"/>';
html += '<stop offset="100%" stop-color="#dce5ef"/>';
html += '</linearGradient>';
html += '<radialGradient id="donut-track-vignette" cx="50%" cy="85%" r="75%" fx="50%" fy="85%">';
html += '<stop offset="0%" stop-color="#ffffff" stop-opacity="0.35"/>';
html += '<stop offset="70%" stop-color="#ffffff" stop-opacity="0"/>';
html += '</radialGradient>';
html += '<radialGradient id="donut-inner-gloss" cx="35%" cy="75%" r="55%">';
html += '<stop offset="0%" stop-color="#ffffff" stop-opacity="0.45"/>';
html += '<stop offset="55%" stop-color="#ffffff" stop-opacity="0.08"/>';
html += '<stop offset="100%" stop-color="#ffffff" stop-opacity="0"/>';
html += '</radialGradient>';
html += '<filter id="donut-segment-soften" x="-18%" y="-18%" width="136%" height="136%" color-interpolation-filters="sRGB">';
html += '<feGaussianBlur in="SourceAlpha" stdDeviation="0.8" result="blur"/>';
html += '<feOffset dx="0" dy="1.5" in="blur" result="off"/>';
html += '<feFlood flood-color="#0f172a" flood-opacity="0.13" result="flood"/>';
html += '<feComposite in="flood" in2="off" operator="in" result="shadow"/>';
html += '<feMerge><feMergeNode in="shadow"/><feMergeNode in="SourceGraphic"/></feMerge>';
html += '</filter>';
Object.keys(SEVERITY_DONUT_GRADIENTS).forEach(function (id) {
var stops = SEVERITY_DONUT_GRADIENTS[id];
html += '<linearGradient id="donut-grad-' + id + '" x1="18%" y1="12%" x2="88%" y2="94%">';
html += '<stop offset="0%" stop-color="' + stops[0] + '"/>';
html += '<stop offset="52%" stop-color="' + stops[1] + '"/>';
html += '<stop offset="100%" stop-color="' + stops[2] + '"/>';
html += '</linearGradient>';
});
defsEl.innerHTML = html;
}
function renderSeverityDonut(bySeverity, total) {
var svgEl = document.getElementById('dashboard-severity-donut');
var trackEl = document.getElementById('dashboard-severity-donut-track');
var leadersEl = document.getElementById('dashboard-severity-donut-leaders');
var segmentsEl = document.getElementById('dashboard-severity-donut-segments');
var hitsEl = document.getElementById('dashboard-severity-donut-hits');
var labelsEl = document.getElementById('dashboard-severity-donut-labels');
if (!trackEl || !segmentsEl || !labelsEl) return;
var cfg = SEVERITY_DONUT_CFG;
severityDonutState.bySeverity = bySeverity && typeof bySeverity === 'object' ? bySeverity : {};
severityDonutState.total = total || 0;
severityDonutState.hoverId = null;
ensureSeverityDonutDefs();
// Background track (full half-ring): two-layer fill to create a groove plus highlight
if (!trackEl.hasChildNodes()) {
var trackPath = halfRingPath(cfg.cx, cfg.cy, cfg.rOuter, cfg.rInner);
trackEl.innerHTML =
'<path class="donut-track-shadow" d="' + trackPath + '"/>' +
'<path class="donut-track" fill="url(#donut-track-face)" d="' + trackPath + '"/>' +
'<path class="donut-track-vignette" fill="url(#donut-track-vignette)" d="' + trackPath + '"/>';
}
var ids = ['critical', 'high', 'medium', 'low', 'info'];
@@ -1434,12 +1521,24 @@ function renderSeverityDonut(bySeverity, total) {
});
var visible = severities.filter(function (s) { return s.value > 0; });
if (svgEl) {
svgEl.classList.remove('is-highlighting');
svgEl.removeAttribute('data-hover-severity');
}
if (!total || total <= 0 || visible.length === 0) {
segmentsEl.innerHTML = '';
if (hitsEl) hitsEl.innerHTML = '';
labelsEl.innerHTML = '';
if (leadersEl) leadersEl.innerHTML = '';
clearSeverityDonutLegendHighlight();
resetSeverityDonutCenter(false);
_clearSeverityDonutChartWrapHover();
if (svgEl) svgEl.classList.remove('donut-ready');
return;
}
resetSeverityDonutCenter(true);
// Arc lengths are computed as value/total; if the severity sum < total (some vulns are ungraded), the right side keeps a blank stretch of the background track
var sumVisible = visible.reduce(function (s, seg) { return s + seg.value; }, 0);
var coverage = sumVisible / total; // fraction of the half-ring actually covered by segments
@@ -1449,7 +1548,10 @@ function renderSeverityDonut(bySeverity, total) {
var arcsTotalRad = Math.max(0, Math.PI * coverage - totalGapRad);
var segmentsHtml = '';
var hitsHtml = '';
var glossHtml = '';
var labelsHtml = '';
var leadersHtml = '';
var cumRad = 0;
visible.forEach(function (seg, i) {
@@ -1459,17 +1561,21 @@ function renderSeverityDonut(bySeverity, total) {
var angleEnd = angleStart - segRad;
var path = arcSegmentPath(cfg.cx, cfg.cy, cfg.rOuter, cfg.rInner, angleStart, angleEnd);
var pctOfTotal = (seg.value / total) * 100;
var pctRounded = Math.round(pctOfTotal);
var name = esc(severityLabel(seg.id));
var ariaLabel = name + ' ' + seg.value + ' (' + pctRounded + '%)';
segmentsHtml += '<path class="donut-segment seg-' + seg.id + '" data-severity="' + seg.id + '" data-count="' + seg.value + '" data-pct="' + pctRounded + '" fill="url(#donut-grad-' + seg.id + ')" d="' + path + '"/>';
hitsHtml += '<path class="donut-segment-hit seg-' + seg.id + '" data-severity="' + seg.id + '" fill="transparent" d="' + path + '" tabindex="0" role="button" aria-label="' + ariaLabel + '"/>';
glossHtml += '<path class="donut-segment-gloss seg-' + seg.id + '" data-severity="' + seg.id + '" fill="url(#donut-inner-gloss)" d="' + arcSegmentPath(cfg.cx, cfg.cy, cfg.rOuter - 2, cfg.rInner + 6, angleStart, angleEnd) + '" pointer-events="none"/>';
// Only show an external label when the share is >= 5%, so small-segment labels don't overlap each other
if (pctOfTotal >= 5) {
var midAngle = (angleStart + angleEnd) / 2;
var labelR = cfg.rOuter + cfg.labelOffset + 6;
var sinMid = Math.sin(midAngle);
var cosMid = Math.cos(midAngle);
var lx = cfg.cx + labelR * cosMid;
// Lift labels in the top region up a bit so they don't hug the outer arc; side labels are left unadjusted
var topLift = sinMid > 0.4 ? Math.round((sinMid - 0.3) * 10) : 0;
var ly = cfg.cy - labelR * sinMid - topLift;
@@ -1477,11 +1583,15 @@ function renderSeverityDonut(bySeverity, total) {
if (cosMid < -0.15) anchor = 'end';
else if (cosMid > 0.15) anchor = 'start';
var pctText = pctRounded + '%';
var arcR = cfg.rOuter + 4;
var lineX1 = cfg.cx + arcR * cosMid;
var lineY1 = cfg.cy - arcR * sinMid;
var lineX2 = cfg.cx + (cfg.rOuter + cfg.labelOffset - 2) * cosMid;
var lineY2 = cfg.cy - (cfg.rOuter + cfg.labelOffset - 2) * sinMid;
leadersHtml += '<line class="donut-leader label-' + seg.id + '" data-severity="' + seg.id + '" pathLength="100" x1="' + lineX1.toFixed(1) + '" y1="' + lineY1.toFixed(1) + '" x2="' + lineX2.toFixed(1) + '" y2="' + lineY2.toFixed(1) + '"/>';
// Two lines: line 1 "count (percent)" in the arc color, line 2 the severity name (same color, slightly smaller)
labelsHtml += '<text class="donut-label-text label-' + seg.id + '" data-severity="' + seg.id + '" text-anchor="' + anchor + '" x="' + lx.toFixed(1) + '" y="' + ly.toFixed(1) + '">';
labelsHtml += '<tspan x="' + lx.toFixed(1) + '" dy="0">' + seg.value + ' <tspan class="donut-label-pct">(' + pctText + ')</tspan></tspan>';
labelsHtml += '<tspan class="donut-label-name" x="' + lx.toFixed(1) + '" dy="14">' + name + '</tspan>';
labelsHtml += '</text>';
@@ -1491,8 +1601,308 @@ function renderSeverityDonut(bySeverity, total) {
if (i < visibleCount - 1) cumRad += cfg.gapRad;
});
if (leadersEl) leadersEl.innerHTML = leadersHtml;
segmentsEl.innerHTML = segmentsHtml + glossHtml;
if (hitsEl) hitsEl.innerHTML = hitsHtml;
labelsEl.innerHTML = labelsHtml;
if (svgEl) {
svgEl.classList.remove('donut-ready');
void svgEl.offsetWidth; // force a reflow so the donut-ready transition replays
requestAnimationFrame(function () {
svgEl.classList.add('donut-ready');
});
}
scheduleSeverityCenterCountUp(total);
attachSeverityDonutInteractivity();
}
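The gap/coverage arithmetic above can be checked in isolation; the sketch below restates it (assuming the same `coverage` and `gapRad` semantics as `renderSeverityDonut`):

```javascript
// Drawable arc budget for the half-ring: PI * coverage, minus one fixed gap
// between each pair of adjacent visible segments (N segments -> N - 1 gaps).
function arcBudget(coverage, visibleCount, gapRad) {
  const totalGapRad = Math.max(0, visibleCount - 1) * gapRad;
  return Math.max(0, Math.PI * coverage - totalGapRad);
}

// Full coverage with a single segment uses the whole half-ring (PI radians);
// three segments with 0.03 rad gaps lose 0.06 rad to spacing.
console.log(arcBudget(1, 1, 0.03) === Math.PI); // true
console.log(arcBudget(1, 3, 0.03).toFixed(2));  // "3.08"
```

Each segment then receives a slice of this budget proportional to `value / sumVisible`.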
function scheduleSeverityCenterCountUp(targetTotal) {
if (window.matchMedia && window.matchMedia('(prefers-reduced-motion: reduce)').matches) {
var totalEl = document.getElementById('dashboard-severity-total');
if (totalEl) totalEl.textContent = String(targetTotal);
severityDonutCenterDisplayed.total = targetTotal;
return;
}
var totalEl = document.getElementById('dashboard-severity-total');
if (!totalEl || severityDonutState.hoverId) return;
var from = typeof severityDonutCenterDisplayed.total === 'number' ? severityDonutCenterDisplayed.total : 0;
var to = targetTotal;
if (from === to) {
totalEl.textContent = String(to);
severityDonutCenterDisplayed.total = to;
return;
}
var start = null;
var dur = Math.min(520, 180 + Math.abs(to - from) * 28);
function tick(now) {
if (!start) start = now;
var t = Math.min(1, (now - start) / dur);
var eased = 1 - Math.pow(1 - t, 3);
var val = Math.round(from + (to - from) * eased);
totalEl.textContent = String(val);
if (t < 1) {
requestAnimationFrame(tick);
} else {
totalEl.textContent = String(to);
severityDonutCenterDisplayed.total = to;
}
}
requestAnimationFrame(tick);
}
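The count-up above uses a cubic ease-out; a standalone sketch of the same easing as `tick`:

```javascript
// easeOutCubic(t) = 1 - (1 - t)^3: fast at the start, decelerating into the target.
function easeOutCubic(t) {
  return 1 - Math.pow(1 - t, 3);
}

// Interpolated display value at normalized time t in [0, 1].
function valueAt(from, to, t) {
  return Math.round(from + (to - from) * easeOutCubic(t));
}

console.log(valueAt(0, 100, 0));   // 0
console.log(valueAt(0, 100, 0.5)); // 88 — already well past halfway
console.log(valueAt(0, 100, 1));   // 100
```

The duration formula `Math.min(520, 180 + |to - from| * 28)` pairs with this so small deltas finish quickly while large jumps stay under ~half a second.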
function resetSeverityDonutCenter(skipTotalSnapshot) {
var totalEl = document.getElementById('dashboard-severity-total');
var labelEl = document.getElementById('dashboard-severity-center-label');
var centerEl = document.getElementById('dashboard-severity-center');
var n = severityDonutState.total || 0;
if (!skipTotalSnapshot && totalEl) totalEl.textContent = String(n);
if (!skipTotalSnapshot) severityDonutCenterDisplayed.total = n;
severityDonutCenterDisplayed.hoverCount = null;
if (labelEl) {
labelEl.textContent = (typeof window.t === 'function' ? window.t('dashboard.totalVulns') : '总漏洞数');
labelEl.classList.remove('is-severity');
labelEl.removeAttribute('data-severity');
}
if (centerEl) centerEl.classList.remove('is-hovering');
}
function setSeverityDonutHover(severityId) {
var svgEl = document.getElementById('dashboard-severity-donut');
var centerEl = document.getElementById('dashboard-severity-center');
var totalEl = document.getElementById('dashboard-severity-total');
var labelEl = document.getElementById('dashboard-severity-center-label');
if (!severityId) {
severityDonutState.hoverId = null;
if (svgEl) {
svgEl.classList.remove('is-highlighting');
svgEl.removeAttribute('data-hover-severity');
}
clearSeverityDonutLegendHighlight();
resetSeverityDonutCenter(false);
_clearSeverityDonutChartWrapHover();
return;
}
var count = (severityDonutState.bySeverity && severityDonutState.bySeverity[severityId]) || 0;
severityDonutState.hoverId = severityId;
if (svgEl) {
svgEl.classList.add('is-highlighting');
svgEl.setAttribute('data-hover-severity', severityId);
}
highlightSeverityDonutParts(severityId);
highlightSeverityLegendItem(severityId);
if (totalEl) {
totalEl.textContent = String(count);
severityDonutCenterDisplayed.hoverCount = count;
}
if (labelEl) {
labelEl.textContent = severityLabel(severityId);
labelEl.classList.add('is-severity');
labelEl.setAttribute('data-severity', severityId);
}
if (centerEl) centerEl.classList.add('is-hovering');
var chartWrap = document.querySelector('.dashboard-severity-chart');
if (chartWrap) chartWrap.setAttribute('data-hover-severity', severityId);
}
function _clearSeverityDonutChartWrapHover() {
var chartWrap = document.querySelector('.dashboard-severity-chart');
if (chartWrap) chartWrap.removeAttribute('data-hover-severity');
}
function highlightSeverityDonutParts(severityId) {
var svgEl = document.getElementById('dashboard-severity-donut');
if (!svgEl) return;
svgEl.querySelectorAll('.donut-segment[data-severity], .donut-segment-gloss[data-severity], .donut-leader[data-severity], .donut-label-text[data-severity]').forEach(function (el) {
var match = el.getAttribute('data-severity') === severityId;
el.classList.toggle('is-active', match);
el.classList.toggle('is-dimmed', !match);
});
}
function highlightSeverityLegendItem(severityId) {
var legend = document.getElementById('dashboard-vuln-bars');
if (!legend) return;
legend.querySelectorAll('.dashboard-severity-legend-item').forEach(function (item) {
var match = item.getAttribute('data-severity') === severityId;
item.classList.toggle('is-active', match);
});
}
function clearSeverityDonutLegendHighlight() {
var legend = document.getElementById('dashboard-vuln-bars');
if (legend) {
legend.querySelectorAll('.dashboard-severity-legend-item.is-active').forEach(function (el) {
el.classList.remove('is-active');
});
}
var svgEl = document.getElementById('dashboard-severity-donut');
if (svgEl) {
svgEl.querySelectorAll('.is-active, .is-dimmed').forEach(function (el) {
el.classList.remove('is-active', 'is-dimmed');
});
}
}
function severityDonutTooltipText(severityId) {
var count = (severityDonutState.bySeverity && severityDonutState.bySeverity[severityId]) || 0;
var pct = severityDonutState.total > 0 ? Math.round((count / severityDonutState.total) * 100) : 0;
var hint = (typeof window.t === 'function' ? window.t('dashboard.severityClickHint') : '点击查看');
return severityLabel(severityId) + ' · ' + count + ' (' + pct + '%) — ' + hint;
}
function showSeverityDonutTooltip(ev, severityId) {
if (!severityDonutTooltipEl) {
severityDonutTooltipEl = document.createElement('div');
severityDonutTooltipEl.className = 'dashboard-severity-donut-tooltip';
severityDonutTooltipEl.setAttribute('role', 'tooltip');
document.body.appendChild(severityDonutTooltipEl);
}
clearTimeout(severityDonutTooltipTimer);
severityDonutTooltipTimer = setTimeout(function () {
severityDonutTooltipEl.textContent = severityDonutTooltipText(severityId);
severityDonutTooltipEl.style.display = 'block';
requestAnimationFrame(function () {
var x = ev.clientX;
var y = ev.clientY;
var ttRect = severityDonutTooltipEl.getBoundingClientRect();
var left = x - ttRect.width / 2;
var top = y - ttRect.height - 12;
if (top < 8) top = y + 16;
var pad = 8;
if (left < pad) left = pad;
if (left + ttRect.width > window.innerWidth - pad) left = window.innerWidth - ttRect.width - pad;
severityDonutTooltipEl.style.left = left + 'px';
severityDonutTooltipEl.style.top = top + 'px';
});
}, 120);
}
function hideSeverityDonutTooltip() {
clearTimeout(severityDonutTooltipTimer);
severityDonutTooltipTimer = null;
if (severityDonutTooltipEl) severityDonutTooltipEl.style.display = 'none';
}
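The positioning math inside `showSeverityDonutTooltip` can be factored into a pure function for clarity; in this sketch the viewport width and padding are passed in for testability (these parameter names are mine, not the codebase's):

```javascript
// Center the tooltip on the cursor, flip it below the cursor if it would
// overflow the top edge, then clamp horizontally to the viewport.
function placeTooltip(x, y, ttW, ttH, vw, pad = 8) {
  let left = x - ttW / 2;
  let top = y - ttH - 12;
  if (top < pad) top = y + 16;            // not enough room above: show below
  if (left < pad) left = pad;             // clamp left edge
  if (left + ttW > vw - pad) left = vw - ttW - pad; // clamp right edge
  return { left, top };
}

console.log(placeTooltip(100, 20, 120, 30, 800)); // { left: 40, top: 36 }
```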
function attachSeverityDonutInteractivity() {
var hitsEl = document.getElementById('dashboard-severity-donut-hits');
var legend = document.getElementById('dashboard-vuln-bars');
if (!hitsEl) return;
if (!severityDonutState.bound) {
severityDonutState.bound = true;
hitsEl.addEventListener('mouseover', severityDonutPointerOver);
hitsEl.addEventListener('mouseout', severityDonutPointerOut);
hitsEl.addEventListener('click', severityDonutClick);
hitsEl.addEventListener('keydown', severityDonutKeydown);
if (legend) {
legend.addEventListener('mouseover', severityLegendPointerOver);
legend.addEventListener('mouseout', severityLegendPointerOut);
legend.addEventListener('click', severityLegendClick);
legend.addEventListener('keydown', severityLegendKeydown);
}
}
legend && legend.querySelectorAll('.dashboard-severity-legend-item').forEach(function (item) {
if (!item.getAttribute('data-severity')) return;
var sev = item.getAttribute('data-severity');
var count = (severityDonutState.bySeverity && severityDonutState.bySeverity[sev]) || 0;
item.classList.toggle('is-zero', count === 0);
item.setAttribute('aria-label', severityDonutTooltipText(sev));
});
}
function severityDonutHitTarget(el) {
return el && el.closest && el.closest('.donut-segment-hit');
}
function severityDonutCancelHoverClear() {
clearTimeout(severityDonutHoverClearTimer);
severityDonutHoverClearTimer = null;
}
function severityDonutScheduleHoverClear() {
severityDonutCancelHoverClear();
severityDonutHoverClearTimer = setTimeout(function () {
severityDonutHoverClearTimer = null;
setSeverityDonutHover(null);
hideSeverityDonutTooltip();
}, 60);
}
function severityDonutPointerOver(ev) {
var target = severityDonutHitTarget(ev.target);
if (!target) return;
var id = target.getAttribute('data-severity');
if (!id) return;
severityDonutCancelHoverClear();
if (severityDonutState.hoverId === id) return;
setSeverityDonutHover(id);
showSeverityDonutTooltip(ev, id);
}
function severityDonutPointerOut(ev) {
var related = ev.relatedTarget;
if (related) {
if (severityDonutHitTarget(related)) return;
var legendItem = related.closest && related.closest('.dashboard-severity-legend-item[data-severity]');
if (legendItem) return;
var hitsRoot = document.getElementById('dashboard-severity-donut-hits');
if (hitsRoot && hitsRoot.contains(related)) return;
}
severityDonutScheduleHoverClear();
}
function severityDonutClick(ev) {
var target = severityDonutHitTarget(ev.target);
if (!target) return;
var id = target.getAttribute('data-severity');
if (!id) return;
ev.preventDefault();
navigateToVulnerabilitiesWithFilter({ severity: id });
}
function severityDonutKeydown(ev) {
if (ev.key !== 'Enter' && ev.key !== ' ') return;
var target = severityDonutHitTarget(ev.target);
if (!target) return;
ev.preventDefault();
var id = target.getAttribute('data-severity');
if (id) navigateToVulnerabilitiesWithFilter({ severity: id });
}
function severityLegendPointerOver(ev) {
var item = ev.target && ev.target.closest && ev.target.closest('.dashboard-severity-legend-item[data-severity]');
if (!item) return;
var id = item.getAttribute('data-severity');
if (!id) return;
severityDonutCancelHoverClear();
setSeverityDonutHover(id);
showSeverityDonutTooltip(ev, id);
}
function severityLegendPointerOut(ev) {
var item = ev.target && ev.target.closest && ev.target.closest('.dashboard-severity-legend-item[data-severity]');
var related = ev.relatedTarget && ev.relatedTarget.closest && ev.relatedTarget.closest('.dashboard-severity-legend-item[data-severity]');
if (item && item === related) return;
severityDonutScheduleHoverClear();
}
function severityLegendClick(ev) {
var item = ev.target && ev.target.closest && ev.target.closest('.dashboard-severity-legend-item[data-severity]');
if (!item) return;
var id = item.getAttribute('data-severity');
if (!id) return;
ev.preventDefault();
navigateToVulnerabilitiesWithFilter({ severity: id });
}
function severityLegendKeydown(ev) {
if (ev.key !== 'Enter' && ev.key !== ' ') return;
var item = ev.target && ev.target.closest && ev.target.closest('.dashboard-severity-legend-item[data-severity]');
if (!item) return;
ev.preventDefault();
var id = item.getAttribute('data-severity');
if (id) navigateToVulnerabilitiesWithFilter({ severity: id });
}
// SVG half-ring (background track) path
+169 -12
@@ -273,6 +273,116 @@ function escapeHtmlLocal(text) {
return div.innerHTML;
}
/** Fenced-block placeholders (BMP Private Use Area, which virtually never appears in body text) */
const _MD_FENCE_PRE = '\n\uE000CSAI_FENCE_';
const _MD_FENCE_SUF = '_\uE000\n';
function _maskFencedCodeBlocksForMdPreprocess(md) {
const blocks = [];
const masked = String(md).replace(/```[\s\S]*?```/g, (m) => {
const i = blocks.length;
blocks.push(m);
return _MD_FENCE_PRE + i + _MD_FENCE_SUF;
});
return { masked, blocks };
}
function _unmaskFencedCodeBlocksAfterMdPreprocess(s, blocks) {
let out = s;
for (let i = 0; i < blocks.length; i++) {
out = out.split(_MD_FENCE_PRE + i + _MD_FENCE_SUF).join(blocks[i]);
}
return out;
}
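To see why the masking matters, the pair above can be exercised round-trip (restated here with local names so the snippet is self-contained):

```javascript
// Fenced code blocks are swapped out for private-use-area placeholders so
// later HTML rewrites cannot touch code samples, then restored verbatim.
const PRE = '\n\uE000CSAI_FENCE_';
const SUF = '_\uE000\n';
function mask(md) {
  const blocks = [];
  const masked = String(md).replace(/```[\s\S]*?```/g, (m) => {
    blocks.push(m);
    return PRE + (blocks.length - 1) + SUF;
  });
  return { masked, blocks };
}
function unmask(s, blocks) {
  let out = s;
  for (let i = 0; i < blocks.length; i++) {
    out = out.split(PRE + i + SUF).join(blocks[i]);
  }
  return out;
}

const src = 'before\n```js\n<div>not html</div>\n```\nafter';
const { masked, blocks } = mask(src);
console.log(masked.includes('<div>'));       // false — safe to rewrite HTML now
console.log(unmask(masked, blocks) === src); // true — restored exactly
```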
/**
 * Models/gateways occasionally leak reasoning into the body wrapped in pseudo-XML
 * (<redacted_thinking></redacted_thinking>). When mixed with Markdown lists, the closing
 * tag is often swallowed into an <li>, after which inline syntax (**, `) no longer parses;
 * remove matched blocks wholesale.
 * @param {string} segment
 * @returns {string}
 */
function _stripXmlReasoningWrappersForMarkdown(segment) {
let t = String(segment);
const tags = ['redacted_thinking', 'redacted_reasoning'];
for (let i = 0; i < tags.length; i++) {
const name = tags[i];
const re = new RegExp('<\\s*' + name + '\\b[^>]*>[\\s\\S]*?<\\s*/\\s*' + name + '\\s*>', 'gi');
t = t.replace(re, '\n\n');
}
return t.replace(/\n{3,}/g, '\n\n');
}
/**
 * Unwrap the block-level HTML shells LLMs commonly emit: `<div>`, `<p>`, `<section>`, `<article>`, `<main>`.
 * When a whole passage is wrapped in a block-level tag, CommonMark does not parse Markdown
 * inside the block, so ** and ` are rendered literally.
 */
function _unwrapHtmlBlockWrappersForMarkdown(segment) {
let s = segment;
let prev;
for (let i = 0; i < 30 && s !== prev; i++) {
prev = s;
s = s.replace(/<div(?:\s[^>]*)?>([\s\S]*?)<\/div>/gi, (_, inner) => String(inner).trim() + '\n\n');
s = s.replace(/<p(?:\s[^>]*)?>([\s\S]*?)<\/p>/gi, (_, inner) => String(inner).trim() + '\n\n');
s = s.replace(/<section(?:\s[^>]*)?>([\s\S]*?)<\/section>/gi, (_, inner) => String(inner).trim() + '\n\n');
s = s.replace(/<article(?:\s[^>]*)?>([\s\S]*?)<\/article>/gi, (_, inner) => String(inner).trim() + '\n\n');
s = s.replace(/<main(?:\s[^>]*)?>([\s\S]*?)<\/main>/gi, (_, inner) => String(inner).trim() + '\n\n');
s = s.replace(/\n{3,}/g, '\n\n');
}
return s;
}
/**
 * Restore HTML lists / glued-together `<li>` elements to Markdown list lines and drop the
 * outer `<ul>`, so marked can parse inline **, `, etc.
 * @param {string} segment
 * @returns {string}
 */
function _flattenOrphanHtmlLiInMarkdown(segment) {
let s = segment;
s = s.replace(/<li(?:\s[^>]*)?>([\s\S]*?)<\/li>/gi, (_, inner) => {
const body = String(inner).trim().replace(/\s*\n\s*/g, ' ');
return '- ' + body + '\n';
});
s = s.replace(/<\/?ul(?:\s[^>]*)?>/gi, '\n');
s = s.replace(/<\/?ol(?:\s[^>]*)?>/gi, '\n');
s = s.replace(/([0-9A-Za-z_\u4e00-\u9fff])\s*<li(?:\s[^>]*)?>\s*/g, (_, ch) => ch + '\n- ');
return s.replace(/\n{3,}/g, '\n\n');
}
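A quick check of the flattening (a trimmed-down restatement of `_flattenOrphanHtmlLiInMarkdown`, handling only `<li>`/`<ul>` for brevity):

```javascript
// Orphan <li> items become Markdown dashes so marked can parse the inline
// ** and ` inside them; the surrounding <ul> collapses to a blank line.
function flattenLi(s) {
  let out = s.replace(/<li(?:\s[^>]*)?>([\s\S]*?)<\/li>/gi, (_, inner) => {
    return '- ' + String(inner).trim().replace(/\s*\n\s*/g, ' ') + '\n';
  });
  out = out.replace(/<\/?ul(?:\s[^>]*)?>/gi, '\n');
  return out.replace(/\n{3,}/g, '\n\n');
}

console.log(JSON.stringify(flattenLi('<ul><li>**bold** item</li><li>`code` item</li></ul>')));
// prints "\n- **bold** item\n- `code` item\n\n"
```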
/** Leading Unicode bullets → Markdown list marker `- ` (models often emit • instead of `-`) */
function _normalizeUnicodeBulletMarkersToMdDash(segment) {
return segment
.replace(/^\s*\u2022\s+/gm, '- ')
.replace(/^\s*\u00b7\s+/gm, '- ');
}
/**
 * Normalize assistant Markdown before parsing: strip zero-width characters, and NFKC-fold
 * fullwidth * ` _ etc. to ASCII so marked can recognize emphasis/inline code instead of
 * rendering ** and backticks literally.
 * Also remove pseudo-XML thinking blocks such as <redacted_thinking>, and fix block-level
 * HTML (`<div>`/`<p>`/`<ul>`/`<li>`) and Unicode bullets (•) so block-level HTML cannot
 * swallow inline parsing.
 * @param {string|null|undefined} text
 * @returns {string}
 */
function normalizeAssistantMarkdownSource(text) {
if (text == null) return '';
let s = String(text);
s = s.replace(/[\u200B-\u200D\u200E\u200F\uFEFF\u2060]/g, '');
try {
s = s.normalize('NFKC');
} catch (e) {
/* ignore */
}
s = _stripXmlReasoningWrappersForMarkdown(s);
const fb = _maskFencedCodeBlocksForMdPreprocess(s);
s = _unwrapHtmlBlockWrappersForMarkdown(fb.masked);
s = _flattenOrphanHtmlLiInMarkdown(s);
s = _normalizeUnicodeBulletMarkersToMdDash(s);
s = _unmaskFencedCodeBlocksAfterMdPreprocess(s, fb.blocks);
return s;
}
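The zero-width strip plus NFKC step is the part that rescues emphasis markers; in isolation (assuming `String.prototype.normalize` is available, i.e. any modern engine):

```javascript
// U+FF0A is a FULLWIDTH ASTERISK and U+200B a zero-width space — both defeat
// marked's "**bold**" detection. Strip zero-widths, then NFKC-fold to ASCII.
const raw = '\uFF0A\uFF0Abold\u200B\uFF0A\uFF0A';
const cleaned = raw
  .replace(/[\u200B-\u200D\u200E\u200F\uFEFF\u2060]/g, '')
  .normalize('NFKC');
console.log(cleaned); // **bold**
```

Note that the fenced-block masking runs before the HTML fixes but after this step, so code samples keep their original characters.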
if (typeof window !== 'undefined') {
window.normalizeAssistantMarkdownSource = normalizeAssistantMarkdownSource;
}
/**
* internal/openai.normalizeStreamingDelta 一致兼容网关/模型返回累计全文或整包重发
* 避免前端 buffer += chunk 与后端已归一化的增量叠加导致逐段重复响应中显示了响应中显示了
@@ -316,10 +426,11 @@ function setTimelineItemContentStreamRich(contentEl, html) {
function formatAssistantMarkdownContent(text) {
const raw = text == null ? '' : String(text);
const src = normalizeAssistantMarkdownSource(raw);
if (typeof marked !== 'undefined') {
try {
marked.setOptions({ breaks: true, gfm: true });
const parsed = marked.parse(src, { async: false });
if (typeof DOMPurify !== 'undefined') {
return DOMPurify.sanitize(parsed, assistantMarkdownSanitizeConfig);
}
@@ -661,7 +772,7 @@ function toggleProgressDetails(progressId) {
}
}
// When the orchestrator starts emitting the final reply, hide the whole progress message (the process has moved into the assistant bubble's "expand details" section, avoiding duplication with the progress card)
function hideProgressMessageForFinalReply(progressId) {
if (!progressId) return;
const el = document.getElementById(progressId);
@@ -859,7 +970,7 @@ function integrateProgressToMCPSection(progressId, assistantMessageId, mcpExecut
});
}
// Remove the original progress message (its details were already snapshotted into the assistant message's process-details)
removeMessage(progressId);
}
@@ -1222,12 +1333,41 @@ function handleStreamEvent(event, progressElement, progressId,
});
break;
}
case 'eino_trace_run':
case 'eino_trace_start':
case 'eino_trace_end':
case 'eino_trace_error': {
const d = event.data || {};
const comp = d.component != null ? String(d.component) : '';
const name = d.name != null ? String(d.name) : '';
let glyph = '◆';
if (event.type === 'eino_trace_run') glyph = '●';
else if (event.type === 'eino_trace_start') glyph = '▶';
else if (event.type === 'eino_trace_end') glyph = '■';
else if (event.type === 'eino_trace_error') glyph = '✖';
const title = '[Eino] ' + glyph + ' ' + (comp || 'component') + (name ? '/' + name : '');
const parts = [];
if (d.runId) parts.push('run=' + String(d.runId));
if (d.spanId) parts.push('span=' + String(d.spanId));
if (d.parentSpanId) parts.push('parent=' + String(d.parentSpanId));
if (d.inputSummary) parts.push(String(d.inputSummary));
if (d.outputSummary) parts.push(String(d.outputSummary));
if (d.error) parts.push(String(d.error));
if (event.message && String(event.message).trim()) parts.push(String(event.message));
const body = parts.join(' · ');
addTimelineItem(timeline, 'progress', { title, message: body, data: d });
break;
}
case 'thinking_stream_start':
case 'reasoning_chain_stream_start': {
const d = event.data || {};
const streamId = d.streamId || null;
if (!streamId) break;
const timelineType = event.type === 'reasoning_chain_stream_start' ? 'reasoning_chain' : 'thinking';
let state = thinkingStreamStateByProgressId.get(progressId);
if (!state) {
state = new Map();
@@ -1246,9 +1386,12 @@ function handleStreamEvent(event, progressElement, progressId,
}
break;
}
const labelBase = typeof window.t === 'function'
? window.t(timelineType === 'reasoning_chain' ? 'chat.reasoningChain' : 'chat.aiThinking')
: (timelineType === 'reasoning_chain' ? '推理过程' : 'AI思考');
const emoji = timelineType === 'reasoning_chain' ? '🔗' : '🤔';
const title = timelineAgentBracketPrefix(d) + emoji + ' ' + labelBase;
const itemId = addTimelineItem(timeline, timelineType, {
title: title,
message: ' ',
data: d
@@ -1257,7 +1400,8 @@ function handleStreamEvent(event, progressElement, progressId,
break;
}
case 'thinking_stream_delta':
case 'reasoning_chain_stream_delta': {
const d = event.data || {};
const streamId = d.streamId || null;
if (!streamId) break;
@@ -1281,7 +1425,9 @@ function handleStreamEvent(event, progressElement, progressId,
}
case 'thinking':
case 'reasoning_chain': {
const timelineType = event.type === 'reasoning_chain' ? 'reasoning_chain' : 'thinking';
// If this was already aggregated from *_stream_* events (it carries a streamId), avoid creating a duplicate timeline item
if (event.data && event.data.streamId) {
const streamId = event.data.streamId;
const state = thinkingStreamStateByProgressId.get(progressId);
@@ -1303,12 +1449,17 @@ function handleStreamEvent(event, progressElement, progressId,
}
}
const labelBase = typeof window.t === 'function'
? window.t(timelineType === 'reasoning_chain' ? 'chat.reasoningChain' : 'chat.aiThinking')
: (timelineType === 'reasoning_chain' ? '推理过程' : 'AI思考');
const emoji = timelineType === 'reasoning_chain' ? '🔗' : '🤔';
addTimelineItem(timeline, timelineType, {
title: timelineAgentBracketPrefix(event.data) + emoji + ' ' + labelBase,
message: event.message,
data: event.data
});
break;
}
case 'tool_calls_detected':
addTimelineItem(timeline, 'tool_calls_detected', {
@@ -1734,6 +1885,8 @@ function handleStreamEvent(event, progressElement, progressId,
}
// In multi-agent mode, output produced during iterations is shown only in the timeline; no assistant message bubble is created
// A repeated response_start for the same progressId used to remove the old placeholder first, to avoid multiple "assistant output" cards where only the last one received deltas.
// Changed to keep the old placeholder, so every response_start segment is shown in full in the timeline.
// Create a timeline entry to display output produced during iterations
const title = einoMainStreamPlanningTitle(responseData);
const itemId = addTimelineItem(timeline, 'thinking', {
@@ -2475,7 +2628,7 @@ function addTimelineItem(timeline, type, options) {
`;
// Add detailed content based on the item type
if ((type === 'thinking' || type === 'reasoning_chain' || type === 'planning') && options.message) {
const streamBody = typeof formatTimelineStreamBody === 'function'
? formatTimelineStreamBody(options.message, options.data)
: options.message;
@@ -2530,6 +2683,8 @@ function addTimelineItem(timeline, type, options) {
${escapeHtml(options.message || taskCancelledLabel)}
</div>
`;
} else if (type === 'progress' && options.message) {
content += `<div class="timeline-item-content timeline-eino-trace"><pre class="tool-result">${escapeHtml(options.message)}</pre></div>`;
} else if (type === 'user_interrupt_continue' && options.message) {
const streamBody = typeof formatTimelineStreamBody === 'function'
? formatTimelineStreamBody(options.message, options.data)
@@ -3410,6 +3565,8 @@ function refreshProgressAndTimelineI18n() {
} else {
titleSpan.textContent = ap + '\uD83E\uDD14 ' + _t('chat.aiThinking');
}
} else if (type === 'reasoning_chain') {
titleSpan.textContent = ap + '\uD83D\uDD17 ' + _t('chat.reasoningChain');
} else if (type === 'planning') {
if (item.dataset.orchestration && typeof einoMainStreamPlanningTitle === 'function') {
titleSpan.textContent = einoMainStreamPlanningTitle({
+3
@@ -256,6 +256,9 @@ function toggleRoleSelectionPanel() {
if (typeof closeAgentModePanel === 'function') {
closeAgentModePanel();
}
if (typeof closeChatReasoningPanel === 'function') {
closeChatReasoningPanel();
}
panel.style.display = 'flex'; // 使用flex布局
// 添加打开状态的视觉反馈
if (roleSelectorBtn) {
+38 -4
@@ -159,6 +159,27 @@ async function loadConfig(loadTools = true) {
if (maxTokensEl) {
maxTokensEl.value = currentConfig.openai.max_total_tokens || 120000;
}
const orm = currentConfig.openai && currentConfig.openai.reasoning ? currentConfig.openai.reasoning : {};
const orModeEl = document.getElementById('openai-reasoning-mode');
if (orModeEl) {
const mv = (orm.mode || 'auto').toString().trim().toLowerCase();
orModeEl.value = ['auto', 'on', 'off'].includes(mv) ? mv : 'auto';
}
const orEffEl = document.getElementById('openai-reasoning-effort');
if (orEffEl) {
const ev = (orm.effort || '').toString().trim().toLowerCase();
orEffEl.value = ['', 'low', 'medium', 'high', 'max'].includes(ev) ? ev : '';
}
const orProfEl = document.getElementById('openai-reasoning-profile');
if (orProfEl) {
const pv = (orm.profile || 'auto').toString().trim().toLowerCase();
const ok = ['auto', 'deepseek_compat', 'openai_compat', 'output_config_effort'];
orProfEl.value = ok.includes(pv) ? pv : 'auto';
}
const orAllowEl = document.getElementById('openai-reasoning-allow-client');
if (orAllowEl) {
orAllowEl.checked = orm.allow_client_reasoning !== false;
}
// Populate the FOFA configuration
const fofa = currentConfig.fofa || {};
@@ -1065,13 +1086,23 @@ async function applySettings() {
};
const wecomAgentIdVal = document.getElementById('robot-wecom-agent-id')?.value.trim();
const prevOpenai = (currentConfig && currentConfig.openai) ? currentConfig.openai : {};
const prevRobots = (currentConfig && currentConfig.robots) ? currentConfig.robots : {};
const config = {
openai: {
...prevOpenai,
provider: provider,
api_key: apiKey,
base_url: baseUrl,
model: model,
max_total_tokens: parseInt(document.getElementById('openai-max-total-tokens')?.value) || 120000,
reasoning: {
...(prevOpenai.reasoning || {}),
mode: document.getElementById('openai-reasoning-mode')?.value || 'auto',
effort: (document.getElementById('openai-reasoning-effort')?.value || '').trim(),
profile: document.getElementById('openai-reasoning-profile')?.value || 'auto',
allow_client_reasoning: document.getElementById('openai-reasoning-allow-client')?.checked !== false
}
},
fofa: {
email: document.getElementById('fofa-email')?.value.trim() || '',
@@ -1088,7 +1119,7 @@ async function applySettings() {
return {
enabled: document.getElementById('multi-agent-enabled')?.checked === true,
robot_use_multi_agent: document.getElementById('multi-agent-robot-use')?.checked === true,
batch_use_multi_agent: currentConfig?.multi_agent?.batch_use_multi_agent === true,
plan_execute_loop_max_iterations: peLoop
};
})(),
@@ -1097,6 +1128,7 @@ async function applySettings() {
enabled: c2Enabled
},
robots: {
...(prevRobots.session && typeof prevRobots.session === 'object' ? { session: prevRobots.session } : {}),
wecom: {
enabled: document.getElementById('robot-wecom-enabled')?.checked === true,
token: document.getElementById('robot-wecom-token')?.value.trim() || '',
@@ -1108,13 +1140,15 @@ async function applySettings() {
dingtalk: {
enabled: document.getElementById('robot-dingtalk-enabled')?.checked === true,
client_id: document.getElementById('robot-dingtalk-client-id')?.value.trim() || '',
client_secret: document.getElementById('robot-dingtalk-client-secret')?.value.trim() || '',
allow_conversation_id_fallback: !!(prevRobots.dingtalk && prevRobots.dingtalk.allow_conversation_id_fallback)
},
lark: {
enabled: document.getElementById('robot-lark-enabled')?.checked === true,
app_id: document.getElementById('robot-lark-app-id')?.value.trim() || '',
app_secret: document.getElementById('robot-lark-app-secret')?.value.trim() || '',
verify_token: document.getElementById('robot-lark-verify-token')?.value.trim() || '',
allow_chat_id_fallback: !!(prevRobots.lark && prevRobots.lark.allow_chat_id_fallback)
}
},
tools: []
+17 -1
@@ -72,19 +72,27 @@ function syncVulnerabilityFiltersFromLocationHash() {
const vid = (params.get('id') || '').trim();
const cid = (params.get('conversation_id') || '').trim();
const tid = (params.get('task_id') || '').trim();
const sev = (params.get('severity') || '').trim();
const st = (params.get('status') || '').trim();
if (!vid && !cid && !tid && !sev && !st) {
return;
}
vulnerabilityFilters.id = '';
vulnerabilityFilters.conversation_id = '';
vulnerabilityFilters.task_id = '';
vulnerabilityFilters.severity = '';
vulnerabilityFilters.status = '';
const idEl = document.getElementById('vulnerability-id-filter');
const convEl = document.getElementById('vulnerability-conversation-filter');
const taskEl = document.getElementById('vulnerability-task-filter');
const sevEl = document.getElementById('vulnerability-severity-filter');
const stEl = document.getElementById('vulnerability-status-filter');
if (idEl) idEl.value = '';
if (convEl) convEl.value = '';
if (taskEl) taskEl.value = '';
if (sevEl) sevEl.value = '';
if (stEl) stEl.value = '';
if (vid) {
vulnerabilityFilters.id = vid;
@@ -98,6 +106,14 @@ function syncVulnerabilityFiltersFromLocationHash() {
vulnerabilityFilters.task_id = tid;
if (taskEl) taskEl.value = tid;
}
if (sev) {
vulnerabilityFilters.severity = sev;
if (sevEl) sevEl.value = sev;
}
if (st) {
vulnerabilityFilters.status = st;
if (stEl) stEl.value = st;
}
vulnerabilityPagination.currentPage = 1;
}
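The hash round-trip this sync function consumes can be sketched as follows (the `#vulnerabilities?…` shape is an assumption inferred from the `params.get` calls above):

```javascript
// Parse filter params out of a location hash like
// "#vulnerabilities?severity=high&status=open".
function parseHashFilters(hash) {
  const query = String(hash).split('?')[1] || '';
  const params = new URLSearchParams(query);
  return {
    severity: (params.get('severity') || '').trim(),
    status: (params.get('status') || '').trim()
  };
}

console.log(parseHashFilters('#vulnerabilities?severity=high&status=open'));
// { severity: 'high', status: 'open' }
```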
+23 -11
@@ -1658,6 +1658,8 @@ function buildWebshellTimelineItemFromDetail(detail) {
title = ap + ((typeof window.t === 'function') ? window.t('chat.iterationRound', { n: data.iteration || 1 }) : ('第 ' + (data.iteration || 1) + ' 轮迭代'));
} else if (eventType === 'thinking') {
title = ap + '🤔 ' + ((typeof window.t === 'function') ? window.t('chat.aiThinking') : 'AI 思考');
} else if (eventType === 'reasoning_chain') {
title = ap + '🔗 ' + ((typeof window.t === 'function') ? window.t('chat.reasoningChain') : '推理过程');
} else if (eventType === 'tool_calls_detected') {
title = ap + '🔧 ' + ((typeof window.t === 'function') ? window.t('chat.toolCallsDetected', { count: data.count || 0 }) : ('检测到 ' + (data.count || 0) + ' 个工具调用'));
} else if (eventType === 'tool_call') {
@@ -2847,6 +2849,12 @@ function runWebshellAiSend(conn, inputEl, sendBtn, messagesContainer) {
if (info && info.orchestration) {
body.orchestration = info.orchestration;
}
if (typeof window.buildReasoningRequestPayload === 'function') {
var rp = window.buildReasoningRequestPayload();
if (rp) {
body.reasoning = rp;
}
}
return apiFetch(info.path, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
@@ -2953,17 +2961,19 @@ function runWebshellAiSend(conn, inputEl, sendBtn, messagesContainer) {
appendTimelineItem('iteration', '🔍 ' + iterTitle, iterMessage, _ed);
if (!streamingTarget) assistantDiv.textContent = '…';
// ─── Thinking / reasoning_chain (reasoning process, from reasoning_content) ───
} else if ((_et === 'thinking_stream_start' || _et === 'reasoning_chain_stream_start') && _ed.streamId) {
var isRcStart = _et === 'reasoning_chain_stream_start';
if (wsThinkingStreams.has(_ed.streamId)) {
var tsExist = wsThinkingStreams.get(_ed.streamId);
tsExist.buf = '';
if (tsExist.body) tsExist.body.textContent = '';
} else {
var thinkSLabel = wsTOr('chat.aiThinking', 'AI 思考');
var thinkSLabel = wsTOr(isRcStart ? 'chat.reasoningChain' : 'chat.aiThinking', isRcStart ? '推理过程' : 'AI 思考');
var thinkEmoji = isRcStart ? '🔗' : '🤔';
var thinkSItem = document.createElement('div');
thinkSItem.className = 'webshell-ai-timeline-item webshell-ai-timeline-thinking';
thinkSItem.innerHTML = '<span class="webshell-ai-timeline-title">' + escapeHtml(webshellAgentPx(_ed) + '🤔 ' + thinkSLabel) + '</span>';
thinkSItem.className = 'webshell-ai-timeline-item webshell-ai-timeline-' + (isRcStart ? 'reasoning_chain' : 'thinking');
thinkSItem.innerHTML = '<span class="webshell-ai-timeline-title">' + escapeHtml(webshellAgentPx(_ed) + thinkEmoji + ' ' + thinkSLabel) + '</span>';
var thinkSPre = document.createElement('div');
thinkSPre.className = 'webshell-ai-timeline-msg webshell-thinking-stream-body';
thinkSItem.appendChild(thinkSPre);
@@ -2972,7 +2982,7 @@ function runWebshellAiSend(conn, inputEl, sendBtn, messagesContainer) {
wsThinkingStreams.set(_ed.streamId, { el: thinkSItem, body: thinkSPre, buf: '' });
}
if (!streamingTarget) assistantDiv.textContent = '…';
} else if (_et === 'thinking_stream_delta' && _ed.streamId) {
} else if ((_et === 'thinking_stream_delta' || _et === 'reasoning_chain_stream_delta') && _ed.streamId) {
var tsD = wsThinkingStreams.get(_ed.streamId);
if (tsD) {
var normT = (typeof window.normalizeStreamingDeltaJs === 'function')
@@ -2985,7 +2995,7 @@ function runWebshellAiSend(conn, inputEl, sendBtn, messagesContainer) {
}
}
if (!streamingTarget) assistantDiv.textContent = '…';
} else if (_et === 'thinking_stream_end' && _ed.streamId) {
} else if ((_et === 'thinking_stream_end' || _et === 'reasoning_chain_stream_end') && _ed.streamId) {
var tsE = wsThinkingStreams.get(_ed.streamId);
if (tsE) {
var fullThink = (_em != null && _em !== '') ? String(_em) : tsE.buf;
@@ -2996,13 +3006,15 @@ function runWebshellAiSend(conn, inputEl, sendBtn, messagesContainer) {
}
wsThinkingStreams.delete(_ed.streamId);
}
} else if (_et === 'thinking' && _em) {
} else if ((_et === 'thinking' || _et === 'reasoning_chain') && _em) {
// If the event carries a streamId that already has a streaming entry, skip it to avoid duplicates
if (_ed.streamId && wsThinkingStreams.has(_ed.streamId)) {
// already handled by thinking_stream_*
// already handled by *_stream_*
} else {
var thinkLabel = wsTOr('chat.aiThinking', 'AI 思考');
appendTimelineItem('thinking', webshellAgentPx(_ed) + '🤔 ' + thinkLabel, _em, _ed);
var isRc = _et === 'reasoning_chain';
var thinkLabel = wsTOr(isRc ? 'chat.reasoningChain' : 'chat.aiThinking', isRc ? '推理过程' : 'AI 思考');
var thinkEm = isRc ? '🔗' : '🤔';
appendTimelineItem(isRc ? 'reasoning_chain' : 'thinking', webshellAgentPx(_ed) + thinkEm + ' ' + thinkLabel, _em, _ed);
}
if (!streamingTarget) assistantDiv.textContent = '…';
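The hunks above extend one streamId-keyed buffer (`wsThinkingStreams`) so that `reasoning_chain_stream_*` events share the start/delta/end lifecycle already used for `thinking_stream_*`. A minimal standalone sketch of that pattern, with simplified names and no DOM work (assumptions: return value and function name are illustrative, not the repo's API):

```javascript
// streamId -> { buf } ; in the real code each entry also holds timeline DOM nodes.
const streams = new Map();

function handleStreamEvent(eventType, data, message) {
  const id = data && data.streamId;
  if (!id) return null;
  if (eventType.endsWith('_stream_start')) {
    // Reuse (and reset) an existing entry instead of creating a duplicate item.
    const entry = streams.get(id) || { buf: '' };
    entry.buf = '';
    streams.set(id, entry);
    return null;
  }
  if (eventType.endsWith('_stream_delta')) {
    const entry = streams.get(id);
    if (entry) entry.buf += message || ''; // orphan deltas are ignored
    return null;
  }
  if (eventType.endsWith('_stream_end')) {
    const entry = streams.get(id);
    // Prefer the final message if the server sent one, else the accumulated buffer.
    const full = (message != null && message !== '')
      ? String(message)
      : (entry ? entry.buf : '');
    streams.delete(id); // clean up so a later non-stream event isn't deduped forever
    return full;
  }
  return null;
}
```

Because both event families funnel through the same map, the later `thinking` / `reasoning_chain` (non-stream) branch only needs one `streams.has(streamId)` check to avoid rendering the same content twice.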
+102 -17
@@ -190,10 +190,12 @@
<!-- C2 sidebar entry (with submenu) -->
<div class="nav-item nav-item-has-submenu" data-page="c2" id="nav-c2">
<div class="nav-item-content" data-title="C2" onclick="window.toggleSubmenu('c2')" data-i18n="nav.c2" data-i18n-attr="data-title" data-i18n-skip-text="true">
<svg width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
<path d="M12 2L2 7l10 5 10-5-10-5z"></path>
<path d="M2 17l10 5 10-5"></path>
<path d="M2 12l10 5 10-5"></path>
<svg width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" aria-hidden="true">
<path d="M4.9 19.1C1 15.2 1 8.8 4.9 4.9"></path>
<path d="M7.8 16.2c-2.3-2.3-2.3-6.1 0-8.5"></path>
<circle cx="12" cy="12" r="2"></circle>
<path d="M16.2 7.8c2.3 2.3 2.3 6.1 0 8.5"></path>
<path d="M19.1 4.9C23 8.8 23 15.2 19.1 19"></path>
</svg>
<span data-i18n="nav.c2">C2</span>
<svg class="submenu-arrow" width="16" height="16" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
@@ -201,6 +203,7 @@
</svg>
</div>
<div class="nav-submenu" id="submenu-c2">
<div class="nav-submenu-item" data-page="c2" onclick="switchPage('c2')" data-i18n="nav.c2Manage">C2 管理</div>
<div class="nav-submenu-item" data-page="c2-listeners" onclick="switchPage('c2-listeners')" data-i18n="nav.c2Listeners">监听器</div>
<div class="nav-submenu-item" data-page="c2-sessions" onclick="switchPage('c2-sessions')" data-i18n="nav.c2Sessions">会话</div>
<div class="nav-submenu-item" data-page="c2-tasks" onclick="switchPage('c2-tasks')" data-i18n="nav.c2Tasks">任务</div>
@@ -446,42 +449,46 @@
</div>
</aside>
<div class="dashboard-severity-chart">
<svg class="dashboard-severity-donut" id="dashboard-severity-donut" viewBox="0 0 480 260" preserveAspectRatio="xMidYMid meet" aria-hidden="true">
<svg class="dashboard-severity-donut" id="dashboard-severity-donut" viewBox="0 0 480 260" preserveAspectRatio="xMidYMid meet" role="img" aria-labelledby="dashboard-severity-donut-title">
<title id="dashboard-severity-donut-title" data-i18n="dashboard.severityDistribution">漏洞严重程度分布</title>
<defs id="dashboard-severity-donut-defs"></defs>
<g id="dashboard-severity-donut-track"></g>
<g id="dashboard-severity-donut-leaders"></g>
<g id="dashboard-severity-donut-segments"></g>
<g id="dashboard-severity-donut-hits"></g>
<g id="dashboard-severity-donut-labels"></g>
</svg>
<div class="dashboard-severity-center">
<div class="dashboard-severity-center" id="dashboard-severity-center">
<div class="dashboard-severity-center-value" id="dashboard-severity-total">0</div>
<div class="dashboard-severity-center-label" data-i18n="dashboard.totalVulns">总漏洞数</div>
<div class="dashboard-severity-center-label" id="dashboard-severity-center-label" data-i18n="dashboard.totalVulns">总漏洞数</div>
</div>
</div>
<div class="dashboard-severity-legend" id="dashboard-vuln-bars">
<div class="dashboard-severity-legend-item">
<div class="dashboard-severity-legend-item" data-severity="critical" role="button" tabindex="0">
<span class="dashboard-severity-legend-dot critical"></span>
<span class="dashboard-severity-legend-label" data-i18n="dashboard.severityCritical">严重</span>
<span class="dashboard-severity-legend-value" id="dashboard-severity-critical">0</span>
<span class="dashboard-severity-legend-pct" id="dashboard-severity-critical-pct">0%</span>
</div>
<div class="dashboard-severity-legend-item">
<div class="dashboard-severity-legend-item" data-severity="high" role="button" tabindex="0">
<span class="dashboard-severity-legend-dot high"></span>
<span class="dashboard-severity-legend-label" data-i18n="dashboard.severityHigh">高危</span>
<span class="dashboard-severity-legend-value" id="dashboard-severity-high">0</span>
<span class="dashboard-severity-legend-pct" id="dashboard-severity-high-pct">0%</span>
</div>
<div class="dashboard-severity-legend-item">
<div class="dashboard-severity-legend-item" data-severity="medium" role="button" tabindex="0">
<span class="dashboard-severity-legend-dot medium"></span>
<span class="dashboard-severity-legend-label" data-i18n="dashboard.severityMedium">中危</span>
<span class="dashboard-severity-legend-value" id="dashboard-severity-medium">0</span>
<span class="dashboard-severity-legend-pct" id="dashboard-severity-medium-pct">0%</span>
</div>
<div class="dashboard-severity-legend-item">
<div class="dashboard-severity-legend-item" data-severity="low" role="button" tabindex="0">
<span class="dashboard-severity-legend-dot low"></span>
<span class="dashboard-severity-legend-label" data-i18n="dashboard.severityLow">低危</span>
<span class="dashboard-severity-legend-value" id="dashboard-severity-low">0</span>
<span class="dashboard-severity-legend-pct" id="dashboard-severity-low-pct">0%</span>
</div>
<div class="dashboard-severity-legend-item">
<div class="dashboard-severity-legend-item" data-severity="info" role="button" tabindex="0">
<span class="dashboard-severity-legend-dot info"></span>
<span class="dashboard-severity-legend-label" data-i18n="dashboard.severityInfo">信息</span>
<span class="dashboard-severity-legend-value" id="dashboard-severity-info">0</span>
@@ -792,11 +799,51 @@
<div id="conversations-list" class="conversations-list"></div>
</div>
</div>
<div id="chat-reasoning-wrapper" class="chat-reasoning-wrapper conversation-reasoning-card conversation-reasoning-collapsed" style="display: none;">
<button type="button" id="conversation-reasoning-toggle" class="conversation-reasoning-card-header" onclick="toggleConversationReasoningCard()" aria-expanded="false" aria-controls="conversation-reasoning-body" data-i18n="chat.reasoningCompactAria" data-i18n-attr="aria-label,title" data-i18n-skip-text="true" aria-label="模型推理选项" title="模型推理选项">
<div class="conversation-reasoning-heading">
<span class="conversation-reasoning-icon" aria-hidden="true">
<svg width="16" height="16" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<circle cx="11" cy="11" r="7" stroke="currentColor" stroke-width="1.75"/>
<path d="M16 16l5 5" stroke="currentColor" stroke-width="1.75" stroke-linecap="round"/>
</svg>
</span>
<div class="conversation-reasoning-heading-text">
<span class="conversation-reasoning-title" data-i18n="chat.reasoningPanelTitle">模型推理</span>
<span id="chat-reasoning-summary" class="conversation-reasoning-summary"></span>
</div>
</div>
</button>
<div id="conversation-reasoning-body" class="conversation-reasoning-body" role="region">
<p class="chat-reasoning-panel-hint" data-i18n="chat.reasoningPanelHint">仅 Eino 请求生效,与系统设置中的默认值合并。</p>
<div class="chat-reasoning-fields">
<div class="chat-reasoning-field">
<label class="chat-reasoning-field-label" for="chat-reasoning-mode"><span data-i18n="chat.reasoningModeLabel">模式</span></label>
<select id="chat-reasoning-mode" class="chat-reasoning-select" onchange="persistChatReasoningPrefs()">
<option value="default" data-i18n="chat.reasoningModeDefault">跟随系统</option>
<option value="off" data-i18n="chat.reasoningModeOff">关闭</option>
<option value="on" data-i18n="chat.reasoningModeOn">开启</option>
<option value="auto" data-i18n="chat.reasoningModeAuto">自动</option>
</select>
</div>
<div class="chat-reasoning-field">
<label class="chat-reasoning-field-label" for="chat-reasoning-effort"><span data-i18n="chat.reasoningEffortLabel">推理强度</span></label>
<select id="chat-reasoning-effort" class="chat-reasoning-select" onchange="persistChatReasoningPrefs()">
<option value="" data-i18n="chat.reasoningEffortUnset">不指定</option>
<option value="low">low</option>
<option value="medium">medium</option>
<option value="high">high</option>
<option value="max">max</option>
</select>
</div>
</div>
</div>
</div>
<div class="hitl-sidebar-card hitl-sidebar-collapsed" id="hitl-sidebar-card">
<div class="hitl-sidebar-card-header" onclick="toggleHitlSidebarCard()">
<div class="hitl-sidebar-heading">
<span class="hitl-sidebar-icon" aria-hidden="true">
<svg width="18" height="18" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<svg width="16" height="16" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M12 2L4 5v6.09c0 5.05 3.41 9.76 8 10.91 4.59-1.15 8-5.86 8-10.91V5l-8-3z" stroke="currentColor" stroke-width="1.75" stroke-linejoin="round"/>
<path d="M9.5 12.5l2 2 3-4" stroke="currentColor" stroke-width="1.75" stroke-linecap="round" stroke-linejoin="round"/>
</svg>
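The reasoning panel added above persists a mode and an effort that the chat page sends as `body.reasoning` (see the `buildReasoningRequestPayload` check in the JS diff). A hypothetical sketch of that payload builder — the real function is not shown in this diff, and the name/shape here only mirror the `<select>` values added above, where `default` means "follow system settings":

```javascript
// Hypothetical builder: maps the panel's mode/effort selections to the
// request body, returning null when server-side defaults should apply as-is.
function buildReasoningRequestPayload(mode, effort) {
  if (!mode || mode === 'default') return null; // "follow system": send nothing
  const payload = { mode };                     // 'off' | 'on' | 'auto'
  if (effort) payload.effort = effort;          // 'low' | 'medium' | 'high' | 'max'
  return payload;
}
```

Returning `null` for the default mode keeps the request body unchanged, which matches the guarded `if (rp) { body.reasoning = rp; }` call site in the JS diff.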
@@ -894,6 +941,8 @@
<div id="active-tasks-bar" class="active-tasks-bar"></div>
<div id="chat-messages" class="chat-messages"></div>
<div id="chat-input-container" class="chat-input-container">
<div class="chat-input-primary-row">
<div class="chat-input-leading">
<div class="role-selector-wrapper">
<button id="role-selector-btn" class="role-selector-btn" onclick="toggleRoleSelectionPanel()" data-i18n="chat.selectRole" data-i18n-attr="title" title="选择角色">
<span id="role-selector-icon" class="role-selector-icon">🔵</span>
@@ -979,6 +1028,7 @@
</div>
<input type="hidden" id="agent-mode-select" value="react" autocomplete="off">
</div>
</div>
<div class="chat-input-with-files">
<div id="chat-file-list" class="chat-file-list" aria-label="已选文件列表"></div>
<div id="chat-attachment-progress" class="chat-upload-progress-row" hidden role="status" aria-live="polite">
@@ -1002,6 +1052,7 @@
<path d="M5 12h14M12 5l7 7-7 7" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>
</button>
</div>
</div>
</div>
</div>
@@ -1561,11 +1612,13 @@
<div id="c2-main" class="c2-main">
<div class="c2-welcome">
<div class="c2-welcome-icon">
<svg width="72" height="72" viewBox="0 0 24 24" fill="none" stroke="url(#c2-grad)" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round">
<svg width="72" height="72" viewBox="0 0 24 24" fill="none" stroke="url(#c2-grad)" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" aria-hidden="true">
<defs><linearGradient id="c2-grad" x1="0" y1="0" x2="1" y2="1"><stop offset="0%" stop-color="#00d4ff"/><stop offset="100%" stop-color="#a855f7"/></linearGradient></defs>
<path d="M12 2L2 7l10 5 10-5-10-5z"></path>
<path d="M2 17l10 5 10-5"></path>
<path d="M2 12l10 5 10-5"></path>
<path d="M4.9 19.1C1 15.2 1 8.8 4.9 4.9"></path>
<path d="M7.8 16.2c-2.3-2.3-2.3-6.1 0-8.5"></path>
<circle cx="12" cy="12" r="2"></circle>
<path d="M16.2 7.8c2.3 2.3 2.3 6.1 0 8.5"></path>
<path d="M19.1 4.9C23 8.8 23 15.2 19.1 19"></path>
</svg>
</div>
<h3 data-i18n="c2.welcomeTitle">AI-Native C2 框架</h3>
@@ -1989,6 +2042,38 @@
<input type="number" id="openai-max-total-tokens" data-i18n="settingsBasic.maxTotalTokensPlaceholder" data-i18n-attr="placeholder" placeholder="120000" min="1000" step="1000" />
<small style="color: var(--text-muted, #718096); font-size: 0.75rem;" data-i18n="settingsBasic.maxTotalTokensHint">内存压缩和攻击链构建共用此配置,默认 120000</small>
</div>
<div class="form-group">
<label data-i18n="settingsBasic.openaiReasoningTitle">模型推理(Eino)</label>
<small class="form-hint" data-i18n="settingsBasic.openaiReasoningHint">仅影响 Eino 单代理与多代理;对话页可覆盖(见下方「允许对话覆盖」)。</small>
<div style="display: flex; flex-wrap: wrap; gap: 10px; margin-top: 8px; align-items: center;">
<label for="openai-reasoning-mode" style="font-size: 0.8125rem;" data-i18n="chat.reasoningModeLabel">模式</label>
<select id="openai-reasoning-mode" style="min-width: 120px; padding: 0.35rem 0.5rem; border-radius: 6px; border: 1px solid var(--border-color, #e2e8f0);">
<option value="auto" data-i18n="chat.reasoningModeAuto">自动</option>
<option value="on" data-i18n="chat.reasoningModeOn">开启</option>
<option value="off" data-i18n="chat.reasoningModeOff">关闭</option>
</select>
<label for="openai-reasoning-effort" style="font-size: 0.8125rem;" data-i18n="chat.reasoningEffortLabel">强度</label>
<select id="openai-reasoning-effort" style="min-width: 100px; padding: 0.35rem 0.5rem; border-radius: 6px; border: 1px solid var(--border-color, #e2e8f0);">
<option value="" data-i18n="chat.reasoningEffortUnset">不指定</option>
<option value="low">low</option>
<option value="medium">medium</option>
<option value="high">high</option>
<option value="max">max</option>
</select>
<label for="openai-reasoning-profile" style="font-size: 0.8125rem;" data-i18n="settingsBasic.openaiReasoningProfile">线路</label>
<select id="openai-reasoning-profile" style="min-width: 140px; padding: 0.35rem 0.5rem; border-radius: 6px; border: 1px solid var(--border-color, #e2e8f0);">
<option value="auto">auto</option>
<option value="deepseek_compat">deepseek_compat</option>
<option value="openai_compat">openai_compat</option>
<option value="output_config_effort">output_config_effort</option>
</select>
</div>
<label class="checkbox-label" style="margin-top: 8px;">
<input type="checkbox" id="openai-reasoning-allow-client" class="modern-checkbox" checked />
<span class="checkbox-custom"></span>
<span class="checkbox-text" data-i18n="settingsBasic.openaiReasoningAllowClient">允许对话页覆盖推理选项</span>
</label>
</div>
<div style="display: flex; align-items: center; gap: 8px; margin-top: 2px;">
<a href="javascript:void(0)" id="test-openai-btn" onclick="testOpenAIConnection()" style="font-size: 0.8125rem; color: var(--accent-color, #3182ce); text-decoration: none; cursor: pointer; user-select: none;" data-i18n="settingsBasic.testConnection">测试连接</a>
<span id="test-openai-result" style="font-size: 0.8125rem;"></span>
File diff suppressed because it is too large