feat: refactor project heartbeat data structure and implement project list API

- Add a unified project-list Redis key and a migration tool
- Implement the GET /api/projects endpoint for fetching the project list
- Implement the POST /api/projects/migrate endpoint for data migration
- Update the frontend ProjectSelector component to use real project data
- Extend projectStore state management
- Update related docs and OpenSpec specs
- Add test cases covering the new functionality
2026-01-13 19:45:05 +08:00
parent 19e65d78dc
commit 282f7268ed
66 changed files with 4378 additions and 456 deletions


@@ -10,19 +10,22 @@ Instructions for AI coding assistants using OpenSpec for spec-driven development
- Scaffold: `proposal.md`, `tasks.md`, `design.md` (only if needed), and delta specs per affected capability
- Write deltas: use `## ADDED|MODIFIED|REMOVED|RENAMED Requirements`; include at least one `#### Scenario:` per requirement
- Validate: `openspec validate [change-id] --strict` and fix issues
- Proceed after analysis: you MAY implement without an explicit approval step unless the user asks for one
## Three-Stage Workflow
### Stage 1: Creating Changes
Create proposal when you need to:
- Add features or functionality
- Make breaking changes (API, schema)
- Change architecture or patterns
- Optimize performance (changes behavior)
- Update security patterns
Triggers (examples):
- "Help me create a change proposal"
- "Help me plan a change"
- "Help me create a proposal"
@@ -30,10 +33,12 @@ Triggers (examples):
- "I want to create a spec"
Loose matching guidance:
- Contains one of: `proposal`, `change`, `spec`
- With one of: `create`, `plan`, `make`, `start`, `help`
Skip proposal for:
- Bug fixes (restore intended behavior)
- Typos, formatting, comments
- Dependency updates (non-breaking)
@@ -41,23 +46,28 @@ Skip proposal for:
- Tests for existing behavior
**Workflow**
1. Review `openspec/project.md`, `openspec list`, and `openspec list --specs` to understand current context.
2. Choose a unique verb-led `change-id` and scaffold `proposal.md`, `tasks.md`, optional `design.md`, and spec deltas under `openspec/changes/<id>/`.
3. Draft spec deltas using `## ADDED|MODIFIED|REMOVED Requirements` with at least one `#### Scenario:` per requirement.
4. Run `openspec validate <id> --strict` and resolve any issues before sharing the proposal.
### Stage 2: Implementing Changes
Track these steps as TODOs and complete them one by one.
1. **Read proposal.md** - Understand what's being built
2. **Read design.md** (if exists) - Review technical decisions
3. **Read tasks.md** - Get implementation checklist
4. **Implement tasks sequentially** - Complete in order
5. **Confirm completion** - Ensure every item in `tasks.md` is finished before updating statuses
6. **Update checklist** - After all work is done, set every task to `- [x]` so the list reflects reality
7. **Proceed to implementation** - After analysis and spec alignment, you MAY implement directly; keep the user informed of breaking changes
### Stage 3: Archiving Changes
After deployment, create separate PR to:
- Move `changes/[name]/` → `changes/archive/YYYY-MM-DD-[name]/`
- Update `specs/` if capabilities changed
- Use `openspec archive <change-id> --skip-specs --yes` for tooling-only changes (always pass the change ID explicitly)
@@ -66,6 +76,7 @@ After deployment, create separate PR to:
## Before Any Task
**Context Checklist:**
- [ ] Read relevant specs in `specs/[capability]/spec.md`
- [ ] Check pending changes in `changes/` for conflicts
- [ ] Read `openspec/project.md` for conventions
@@ -73,12 +84,14 @@ After deployment, create separate PR to:
- [ ] Run `openspec list --specs` to see existing capabilities
**Before Creating Specs:**
- Always check if capability already exists
- Prefer modifying existing specs over creating duplicates
- Use `openspec show [spec]` to review current state
- If request is ambiguous, ask 1-2 clarifying questions before scaffolding
### Search Guidance
- Enumerate specs: `openspec spec list --long` (or `--json` for scripts)
- Enumerate changes: `openspec list` (or `openspec change list --json` - deprecated but available)
- Show details:
@@ -147,7 +160,7 @@ openspec/
```
New request?
├─ Bug fix restoring spec behavior? → Fix directly
├─ Typo/format/comment? → Fix directly
├─ New feature/capability? → Create proposal
├─ Breaking change? → Create proposal
├─ Architecture change? → Create proposal
@@ -159,45 +172,60 @@ New request?
1. **Create directory:** `changes/[change-id]/` (kebab-case, verb-led, unique)
2. **Write proposal.md:**
```markdown
# Change: [Brief description of change]
## Why
[1-2 sentences on problem/opportunity]
## What Changes
- [Bullet list of changes]
- [Mark breaking changes with **BREAKING**]
## Impact
- Affected specs: [list capabilities]
- Affected code: [key files/systems]
```
3. **Create spec deltas:** `specs/[capability]/spec.md`
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL provide...
#### Scenario: Success case
- **WHEN** user performs action
- **THEN** expected result
## MODIFIED Requirements
### Requirement: Existing Feature
[Complete modified requirement]
## REMOVED Requirements
### Requirement: Old Feature
**Reason**: [Why removing]
**Migration**: [How to handle]
```
If multiple capabilities are affected, create multiple delta files under `changes/[change-id]/specs/<capability>/spec.md`—one per capability.
4. **Create tasks.md:**
```markdown
## 1. Implementation
- [ ] 1.1 Create database schema
- [ ] 1.2 Implement API endpoint
- [ ] 1.3 Add frontend component
@@ -205,32 +233,40 @@ If multiple capabilities are affected, create multiple delta files under `change
```
5. **Create design.md when needed:**
Create `design.md` if any of the following apply; otherwise omit it:
- Cross-cutting change (multiple services/modules) or a new architectural pattern
- New external dependency or significant data model changes
- Security, performance, or migration complexity
- Ambiguity that benefits from technical decisions before coding
Minimal `design.md` skeleton:
```markdown
## Context
[Background, constraints, stakeholders]
## Goals / Non-Goals
- Goals: [...]
- Non-Goals: [...]
## Decisions
- Decision: [What and why]
- Alternatives considered: [Options + rationale]
## Risks / Trade-offs
- [Risk] → Mitigation
## Migration Plan
[Steps, rollback]
## Open Questions
- [...]
```
@@ -239,22 +275,27 @@ Minimal `design.md` skeleton:
### Critical: Scenario Formatting
**CORRECT** (use #### headers):
```markdown
#### Scenario: User login success
- **WHEN** valid credentials provided
- **THEN** return JWT token
```
**WRONG** (don't use bullets or bold):
```markdown
- **Scenario: User login** ❌
**Scenario**: User login ❌
### Scenario: User login ❌
```
Every requirement MUST have at least one scenario.
### Requirement Wording
- Use SHALL/MUST for normative requirements (avoid should/may unless intentionally non-normative)
### Delta Operations
@@ -267,6 +308,7 @@ Every requirement MUST have at least one scenario.
Headers matched with `trim(header)` - whitespace ignored.
#### When to use ADDED vs MODIFIED
- ADDED: Introduces a new capability or sub-capability that can stand alone as a requirement. Prefer ADDED when the change is orthogonal (e.g., adding "Slash Command Configuration") rather than altering the semantics of an existing requirement.
- MODIFIED: Changes the behavior, scope, or acceptance criteria of an existing requirement. Always paste the full, updated requirement content (header + all scenarios). The archiver will replace the entire requirement with what you provide here; partial deltas will drop previous details.
- RENAMED: Use when only the name changes. If you also change behavior, use RENAMED (name) plus MODIFIED (content) referencing the new name.
@@ -274,14 +316,17 @@ Headers matched with `trim(header)` - whitespace ignored.
Common pitfall: Using MODIFIED to add a new concern without including the previous text. This causes loss of detail at archive time. If you aren't explicitly changing the existing requirement, add a new requirement under ADDED instead.
Authoring a MODIFIED requirement correctly:
1. Locate the existing requirement in `openspec/specs/<capability>/spec.md`.
2. Copy the entire requirement block (from `### Requirement: ...` through its scenarios).
3. Paste it under `## MODIFIED Requirements` and edit to reflect the new behavior.
4. Ensure the header text matches exactly (whitespace-insensitive) and keep at least one `#### Scenario:`.
Example for RENAMED:
```markdown
## RENAMED Requirements
- FROM: `### Requirement: Login`
- TO: `### Requirement: User Authentication`
```
@@ -291,14 +336,17 @@ Example for RENAMED:
### Common Errors
**"Change must have at least one delta"**
- Check `changes/[name]/specs/` exists with .md files
- Verify files have operation prefixes (## ADDED Requirements)
**"Requirement must have at least one scenario"**
- Check scenarios use `#### Scenario:` format (4 hashtags)
- Don't use bullet points or bold for scenario headers
**Silent scenario parsing failures**
- Exact format required: `#### Scenario: Name`
- Debug with: `openspec show [change] --json --deltas-only`
@@ -360,73 +408,88 @@ openspec/changes/add-2fa-notify/
```
auth/spec.md
```markdown
## ADDED Requirements
### Requirement: Two-Factor Authentication
...
```
notifications/spec.md
```markdown
## ADDED Requirements
### Requirement: OTP Email Notification
...
```
## Best Practices
### Simplicity First
- Default to <100 lines of new code
- Single-file implementations until proven insufficient
- Avoid frameworks without clear justification
- Choose boring, proven patterns
### Complexity Triggers
Only add complexity with:
- Performance data showing current solution too slow
- Concrete scale requirements (>1000 users, >100MB data)
- Multiple proven use cases requiring abstraction
### Clear References
- Use `file.ts:42` format for code locations
- Reference specs as `specs/auth/spec.md`
- Link related changes and PRs
### Capability Naming
- Use verb-noun: `user-auth`, `payment-capture`
- Single purpose per capability
- 10-minute understandability rule
- Split if description needs "AND"
### Change ID Naming
- Use kebab-case, short and descriptive: `add-two-factor-auth`
- Prefer verb-led prefixes: `add-`, `update-`, `remove-`, `refactor-`
- Ensure uniqueness; if taken, append `-2`, `-3`, etc.
## Tool Selection Guide
| Task | Tool | Why |
| --------------------- | ---- | ------------------------ |
| Find files by pattern | Glob | Fast pattern matching |
| Search code content | Grep | Optimized regex search |
| Read specific files | Read | Direct file access |
| Explore unknown scope | Task | Multi-step investigation |
## Error Recovery
### Change Conflicts
1. Run `openspec list` to see active changes
2. Check for overlapping specs
3. Coordinate with change owners
4. Consider combining proposals
### Validation Failures
1. Run with `--strict` flag
2. Check JSON output for details
3. Verify spec file format
4. Ensure scenarios properly formatted
### Missing Context
1. Read project.md first
2. Check related specs
3. Review recent archives
@@ -435,17 +498,20 @@ Only add complexity with:
## Quick Reference
### Stage Indicators
- `changes/` - Proposed, not yet built
- `specs/` - Built and deployed
- `archive/` - Completed changes
### File Purposes
- `proposal.md` - Why and what
- `tasks.md` - Implementation steps
- `design.md` - Technical decisions
- `spec.md` - Requirements and behavior
### CLI Essentials
```bash
openspec list # What's in progress?
openspec show [item] # View details


@@ -1,9 +1,11 @@
# Change: Add Console UI
## Why
The BLS Project Console needs a modern console UI supporting project selection, command input, log display, and debug information display.
## What Changes
- Added a project selector component with project search and filtering
- Added a console component with command input and log display
- Added a debug area component for displaying and filtering debug information
@@ -11,5 +13,6 @@
- Implemented console log management with a limit of 1000 records
## Impact
- Affected specs: specs/logging/spec.md, specs/command/spec.md
- Affected code: src/frontend/components/, src/frontend/views/, src/frontend/router/


@@ -1,4 +1,5 @@
## 1. Implementation
- [x] 1.1 Create ProjectSelector component for project selection
- [x] 1.2 Create Console component for command input and log display
- [x] 1.3 Create DebugArea component for debugging information display
@@ -10,6 +11,7 @@
- [x] 1.9 Implement console log management with 1000 record limit
## 2. Urgent Fixes
- [x] 2.1 Update openspec documentation with all changes
- [x] 2.2 Fix scrolling issues in the page
- [x] 2.3 Optimize ProjectSelector by removing filter section


@@ -0,0 +1,64 @@
# Change: Refactor Project Heartbeat Data Structure
## Why
The project heartbeat data currently uses scattered Redis keys (`{projectName}_项目心跳`), which causes the following problems:
1. Hard to manage and query all projects in a unified way
2. The frontend project selector has to hard-code test data
3. No efficient way to fetch the project list and status
4. High cost of data migration and maintenance
## What Changes
- **Add** a unified project-list Redis key: `项目心跳`
- **Add** a data migration tool for moving from the old structure to the new one
- **Add** a project list API endpoint: `GET /api/projects`
- **Add** a data migration API endpoint: `POST /api/projects/migrate`
- **Modify** the frontend ProjectSelector component: remove hard-coded test data and fetch real projects from the API
- **Modify** the backend logs.js and commands.js to stay backward compatible with both data structures
- **Update** the OpenSpec specs to reflect the new API and data structure
## Impact
- Affected specs:
  - `specs/logging/spec.md` - update the log API response format
  - `specs/command/spec.md` - add the project list and migration APIs
  - `specs/redis-connection/spec.md` - add the project-list related APIs
- Affected code:
  - `src/backend/services/redisKeys.js` - add the project-list key helper
  - `src/backend/services/migrateHeartbeatData.js` - add the data migration tool
  - `src/backend/routes/projects.js` - add the project list routes
  - `src/backend/routes/logs.js` - update the heartbeat reading logic
  - `src/backend/routes/commands.js` - update the heartbeat reading logic
  - `src/backend/server.js` - register the project list routes
  - `src/frontend/components/ProjectSelector.vue` - remove fake data, connect to the API
  - `src/frontend/store/projectStore.js` - extend state management
  - `src/frontend/views/LogView.vue` - connect to the real API
  - `src/frontend/views/CommandView.vue` - connect to the real API
  - `src/frontend/views/MainView.vue` - pass the project name down
  - `src/frontend/components/Console.vue` - accept a project-name prop
  - `src/frontend/App.vue` - fix the health check endpoint
## Migration Plan
1. Run the migration: call `POST /api/projects/migrate` (see the sketch after this list)
2. Verify the result: check that the `项目心跳` key contains all projects
3. Test project selection: confirm the frontend shows the project list correctly
4. Test logs and commands: confirm both features still work
5. Clean up old keys (optional): call the migration API with `deleteOldKeys: true`
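A hedged walkthrough of steps 1, 2, and 5, assuming the console backend listens on `http://localhost:3000` and a Node 18+ environment where global `fetch` and top-level `await` are available:
```js
const base = 'http://localhost:3000'; // assumption: console backend address

// Dry run first: validate what would be migrated without writing anything.
let res = await fetch(`${base}/api/projects/migrate`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ dryRun: true }),
});
console.log(await res.json());

// Real migration, keeping the old keys for the transition period.
res = await fetch(`${base}/api/projects/migrate`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ deleteOldKeys: false }),
});
console.log(await res.json()); // expect { success: true, migrated: N, listKey: "项目心跳", ... }

// Verify: the unified list should now contain every project.
console.log(await (await fetch(`${base}/api/projects`)).json());
```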
## Backward Compatibility
The system stays backward compatible:
- It reads from the new project-list structure first
- If a project is not found in the new structure, it falls back to the old one (sketched below)
- This allows a smooth transition; old keys do not need to be deleted immediately
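A minimal sketch of that fallback read, assuming the node `redis` v4 client and that the unified `项目心跳` key is a HASH with one field per project; the actual layout is defined in `redisKeys.js` and may differ:
```js
import { createClient } from 'redis';

const client = createClient({ url: process.env.REDIS_URL });
await client.connect();

// Prefer the unified list, then fall back to the legacy per-project key.
async function readHeartbeat(projectName) {
  const unified = await client.hGet('项目心跳', projectName); // assumed HASH field per project
  if (unified) return JSON.parse(unified);
  const legacy = await client.get(`${projectName}_项目心跳`); // old scattered key
  return legacy ? JSON.parse(legacy) : null;
}
```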
## Benefits
- Unified project management that is easier to maintain
- The frontend shows real project data; hard-coded test data is removed
- Better query efficiency with fewer Redis operations
- Room for future features (project grouping, search, etc.)


@@ -0,0 +1,50 @@
## ADDED Requirements
### Requirement: Project List Management
The system SHALL provide a unified project list structure in Redis to manage all project heartbeats.
#### Scenario: Getting all projects
- **WHEN** the user requests the project list
- **THEN** the system SHALL return all projects with their heartbeat status
- **AND** each project SHALL include: project name, API base URL, last active time, and online status
#### Scenario: Project status calculation
- **WHEN** calculating project status
- **THEN** the system SHALL determine online/offline based on last active time and offline threshold
- **AND** the system SHALL return the age in milliseconds
### Requirement: Data Migration Support
The system SHALL support migrating heartbeat data from old structure to new unified structure.
#### Scenario: Migrating heartbeat data
- **WHEN** the migration process is triggered
- **THEN** the system SHALL read all old heartbeat keys
- **AND** convert them to the new unified list structure
- **AND** optionally delete old keys after successful migration
#### Scenario: Dry run migration
- **WHEN** migration is executed with dryRun flag
- **THEN** the system SHALL validate data without writing
- **AND** return the migration result for review
### Requirement: Backward Compatibility
The system SHALL maintain backward compatibility with old heartbeat data structure.
#### Scenario: Reading from new structure
- **WHEN** reading project heartbeat
- **THEN** the system SHALL first try to read from the new unified list
- **AND** fall back to old structure if not found
#### Scenario: Gradual migration
- **WHEN** old structure data is detected
- **THEN** the system SHALL continue to work with old data
- **AND** allow migration at a later time
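As an illustration only (not part of the delta), the status calculation described in the scenarios above could look roughly like this; the 10-second threshold is an assumption borrowed from the command design:
```js
const OFFLINE_THRESHOLD_MS = 10_000; // assumed offline threshold

// Derive status, isOnline and ageMs from a heartbeat record.
function projectStatus(heartbeat, now = Date.now()) {
  if (!heartbeat || typeof heartbeat.lastActiveAt !== 'number') {
    return { status: 'unknown', isOnline: false, ageMs: null };
  }
  const ageMs = now - heartbeat.lastActiveAt;
  const isOnline = ageMs <= OFFLINE_THRESHOLD_MS;
  return { status: isOnline ? 'online' : 'offline', isOnline, ageMs };
}
```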


@@ -0,0 +1,55 @@
## 1. Redis Data Structure Refactor
- [x] 1.1 Add a projectsListKey() function in redisKeys.js
- [x] 1.2 Create the migrateHeartbeatData.js migration tool
- [x] 1.3 Implement migration logic from scattered keys to the unified list
- [x] 1.4 Implement getProjectsList()
- [x] 1.5 Implement updateProjectHeartbeat()
## 2. Backend API Development
- [x] 2.1 Create the project list API in routes/projects.js
- [x] 2.2 Implement the GET /api/projects endpoint
- [x] 2.3 Implement the POST /api/projects/migrate endpoint
- [x] 2.4 Update logs.js for compatibility with the new data structure
- [x] 2.5 Update commands.js for compatibility with the new data structure
- [x] 2.6 Register the project list routes in server.js
## 3. Frontend Changes
- [x] 3.1 Update ProjectSelector.vue to remove fake data
- [x] 3.2 Fetch the project list from the API
- [x] 3.3 Display project status
- [x] 3.4 Implement auto-refresh
- [x] 3.5 Update projectStore.js to extend state management
- [x] 3.6 Update LogView.vue to use the real API
- [x] 3.7 Update CommandView.vue to use the real API
- [x] 3.8 Update MainView.vue to pass the project name
- [x] 3.9 Update Console.vue to accept a project-name prop
- [x] 3.10 Fix the health check endpoint in App.vue
## 4. Documentation Updates
- [x] 4.1 Create the Redis data structure doc
- [x] 4.2 Update the logging OpenSpec spec
- [x] 4.3 Update the command OpenSpec spec
- [x] 4.4 Update the redis-connection OpenSpec spec
- [x] 4.5 Create the OpenSpec change proposal proposal.md
- [x] 4.6 Create the OpenSpec change proposal tasks.md
## 5. Test Development
- [ ] 5.1 Write unit tests for the data migration
- [ ] 5.2 Write integration tests for the project list API
- [ ] 5.3 Write tests for the ProjectSelector component
- [ ] 5.4 Write performance tests
## 6. Code Quality and Verification
- [ ] 6.1 Run ESLint
- [ ] 6.2 Run Prettier formatting
- [ ] 6.3 Verify the project selection feature
- [ ] 6.4 Verify log reading
- [ ] 6.5 Verify command sending
- [ ] 6.6 Verify data migration
- [ ] 6.7 Performance testing and tuning


@@ -0,0 +1,65 @@
## Context
Command dispatch moves from a Redis queue model to HTTP API calls.
Constraints:
- The input convention comes from the UI: the user types a single line of text, which is split on spaces.
- Commands are routed to the target project's `baseUrl` via `targetProjectName`.
## Goals / Non-Goals
- Goals
  - The console makes all outbound calls to target project APIs through the backend (avoiding browser CORS/auth differences)
  - A unified request format so target projects are easy to implement and observe
  - Well-defined failure behavior (target unreachable / timeout / non-2xx)
- Non-Goals
  - Does not define how a target project executes commands internally
  - Does not introduce an auth scheme in this change (a separate change can add one later if needed)
## Decision: baseUrl mapping via env JSON
Changed: the target project's Redis heartbeat module now provides `apiBaseUrl` (superseding the env-JSON mapping).
### Heartbeat key
- Redis key: `${projectName}_项目心跳`
- Redis type: STRING
- Value: JSON string
Example:
```json
{
"apiBaseUrl": "http://127.0.0.1:4001",
"lastActiveAt": 1760000000000
}
```
### Liveness rule
- The target project SHOULD refresh `lastActiveAt` (a millisecond timestamp) every 3 seconds (see the sketch below)
- If the backend sees `now - lastActiveAt > 10_000`, the project is treated as offline and the command is rejected
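A minimal sketch of the heartbeat writer on the target-project side, assuming the node `redis` v4 client, an ESM entry point with top-level `await`, and illustrative values for the project name and API address:
```js
import { createClient } from 'redis';

const PROJECT_NAME = 'demo-project'; // illustrative project name
const HEARTBEAT_KEY = `${PROJECT_NAME}_项目心跳`;

const client = createClient({ url: process.env.REDIS_URL });
await client.connect();

// Refresh every 3 seconds so the console never sees the heartbeat older than 10 seconds.
setInterval(async () => {
  await client.set(HEARTBEAT_KEY, JSON.stringify({
    apiBaseUrl: 'http://127.0.0.1:4001', // where this project's control API listens
    lastActiveAt: Date.now(),            // millisecond timestamp
  }));
}, 3000);
```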
## Decision: request shape
- UI input: `<apiName> <arg1> <arg2> ...`
- Backend call to the target project: `POST ${baseUrl}/${apiName}` by default
- Body (JSON):
```json
{
"commandId": "cmd-...",
"timestamp": "ISO-8601",
"source": "BLS Project Console",
"apiName": "reload",
"args": ["a", "b"],
"argsText": "a b"
}
```
## Timeouts / retries
- Each request uses a reasonable timeout (e.g., 5s); see the call sketch below
- No retries by default (to avoid duplicate side effects); if retries are needed, the target project must provide idempotent semantics.
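A sketch of the outbound call, assuming Node 18+ global `fetch`; the helper name and return shape are illustrative, not the actual backend code:
```js
// Split the console input and call the target project's API with a 5s timeout, no retries.
async function callTargetApi(baseUrl, commandLine) {
  const [apiName, ...args] = commandLine.trim().split(/\s+/); // first token = apiName
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 5000);
  try {
    const res = await fetch(`${baseUrl}/${apiName}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      signal: controller.signal,
      body: JSON.stringify({
        commandId: `cmd-${Date.now()}`,
        timestamp: new Date().toISOString(),
        source: 'BLS Project Console',
        apiName,
        args,
        argsText: args.join(' '),
      }),
    });
    return { ok: res.ok, status: res.status, data: await res.json().catch(() => null) };
  } finally {
    clearTimeout(timer);
  }
}
```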


@@ -0,0 +1,28 @@
# Change: Command control via HTTP API (remove Redis control queue)
## Why
Command dispatch currently uses the Redis LIST `${targetProjectName}_控制` as a control queue; it is now changed to call the target project directly over HTTP, which simplifies integration and removes the Redis coupling on the control channel.
## What Changes
- **BREAKING**: commands are no longer written to the Redis `${projectName}_控制` queue; target projects no longer need to read a control key.
- Console input rule: before sending, the line is split on spaces; the first token is `apiName` (the endpoint name / path segment) and the remaining tokens are arguments.
- Backend `/api/commands` now resolves the target project's `apiBaseUrl` from its Redis heartbeat and calls `${apiBaseUrl}/${apiName}` (POST by default).
- Target projects must write a heartbeat to Redis containing `apiBaseUrl` and an activity timestamp (a 3-second refresh is recommended); if it has not been updated for 10 consecutive seconds, the backend treats the project as offline.
- Update the integration doc `docs/redis-integration-protocol.md`: drop the "read the control command queue" requirement, keep only the "write status + write console logs" keys, and add the convention that target projects must expose an HTTP control API.
- Adjust the integration doc to use the heartbeat (online detection plus API address) as the single source of liveness.
## Impact
- Affected specs: command
- Affected code:
- src/backend/routes/commands.js
- src/frontend/components/Console.vue
- .env.example (add optional variables related to API calls)
- docs/redis-integration-protocol.md
## Migration
- Target projects: stop reading `${projectName}_控制`; expose HTTP endpoints instead (routed from `apiName` to the matching action).
- Console: the send format moves from "a whole command line" to "API name + arguments".


@@ -0,0 +1,66 @@
## MODIFIED Requirements
### Requirement: Command Sending
The system SHALL send console commands to a target project via HTTP API calls (not via Redis control queues).
#### Scenario: Sending a command to a target project API
- **WHEN** the user enters a command line in the console
- **AND** the command line contains at least one token
- **THEN** the system SHALL treat the first token as `apiName`
- **AND** the system SHALL treat remaining tokens as arguments
- **AND** the backend SHALL invoke the target project's HTTP endpoint derived from `apiName`
- **AND** the user SHALL receive a success confirmation if the target project returns a successful response
#### Scenario: Rejecting an empty command
- **WHEN** the user tries to send an empty command line
- **THEN** the system SHALL display an error message
- **AND** the backend SHALL NOT attempt any outbound call
### Requirement: Target Routing
The system SHALL route outbound calls by `targetProjectName` using the target project's Redis heartbeat info.
#### Scenario: Target project is not configured
- **WHEN** the user sends a command to a target project
- **AND** the backend cannot resolve `apiBaseUrl` from the target project's heartbeat
- **THEN** the backend SHALL return an error response
- **AND** the outbound call SHALL NOT be attempted
#### Scenario: Target project is offline
- **WHEN** the user sends a command to a target project
- **AND** the last heartbeat update is older than 10 seconds
- **THEN** the backend SHALL treat the target project as offline
- **AND** the outbound call SHALL NOT be attempted
### Requirement: Error Handling
The system SHALL handle target API failures gracefully.
#### Scenario: Target API is unreachable or times out
- **WHEN** the backend invokes the target project API
- **AND** the target project cannot be reached or times out
- **THEN** the backend SHALL return a failure response to the frontend
#### Scenario: Target API returns a non-success status
- **WHEN** the target project returns a non-2xx HTTP status
- **THEN** the backend SHALL return a failure response
- **AND** it SHOULD include the upstream status and message for debugging
## REMOVED Requirements
### Requirement: Command Sending to Redis
**Reason**: Command control channel is migrated to HTTP API calls.
**Migration**: Target projects must expose HTTP endpoints; they no longer consume `${projectName}_控制`.
### Requirement: Command Response Handling from Redis
**Reason**: Responses are returned directly via HTTP responses.
**Migration**: Any additional async responses should be written to the project's console/log channel (e.g., Redis console log LIST) if needed.


@@ -0,0 +1,27 @@
## 1. OpenSpec
- [ ] 1.1 Add command spec delta for HTTP control (remove Redis control queue)
- [ ] 1.2 Add design notes for baseUrl mapping + request format
- [ ] 1.3 Run `openspec validate update-command-control-api --strict`
## 2. Backend
- [ ] 2.1 Add Redis heartbeat parsing (apiBaseUrl + lastActiveAt) and offline detection (10s)
- [ ] 2.2 Update POST `/api/commands` to parse `apiName` + args and call target HTTP API using heartbeat apiBaseUrl
- [ ] 2.3 Return structured result to frontend (success/failed + upstream status/body)
## 3. Frontend
- [ ] 3.1 Update Console send behavior + UI copy to reflect API control
- [ ] 3.2 Validate input format (needs at least apiName)
## 4. Docs
- [ ] 4.1 Update docs/redis-integration-protocol.md: remove control queue; add HTTP control API section
- [ ] 4.2 Update .env.example with new optional variables (timeouts/offline threshold)
## 5. Verify
- [ ] 5.1 Run `npm run lint`
- [ ] 5.2 Run `npm run build`
- [ ] 5.3 Smoke test backend endpoints


@@ -1,13 +1,16 @@
# Change: Update Redis Integration Protocol
## Why
We need a stable, machine-generatable protocol for the Redis interaction between the BLS Project Console and other business projects, specifying the status and console information each integrated project must write and the control command queue it must read.
## What Changes
- Unify the Redis key naming rules: each project writes 2 keys and reads 1 key
- Specify each key's Redis data type (STRING/LIST) and value format (enum values / JSON)
- Align the requirements of the logging / command / redis-connection capabilities (so implementers can build against the specs)
## Impact
- Affected specs: specs/redis-connection/spec.md, specs/logging/spec.md, specs/command/spec.md
- Affected code (planned): src/backend/routes/, src/backend/services/, src/frontend/components/


@@ -1,14 +1,17 @@
## MODIFIED Requirements
### Requirement: Command Sending to Redis
The system SHALL send commands to a per-target Redis key.
#### Scenario: Console enqueues a command for a target project
- **WHEN** the user sends a command from the console
- **THEN** the backend SHALL append a JSON message to Redis LIST key `${targetProjectName}_控制`
- **AND** the JSON message SHALL represent the command payload (an object)
#### Scenario: Target project consumes a command
- **WHEN** a target project listens for commands
- **THEN** it SHALL consume messages from `${projectName}_控制` as JSON objects
- **AND** it SHOULD use Redis LIST queue semantics (producer `RPUSH`, consumer `BLPOP`)


@@ -1,14 +1,17 @@
## MODIFIED Requirements
### Requirement: Log Reading from Redis
The system SHALL read log records from per-project Redis keys.
#### Scenario: External project writes console logs
- **WHEN** an external project emits debug/error information
- **THEN** it SHALL append entries to a Redis LIST key named `${projectName}_项目控制台`
- **AND** each entry SHALL be a JSON string representing a log record
#### Scenario: Server reads project console logs
- **WHEN** the server is configured to show a given project
- **THEN** it SHALL read entries from `${projectName}_项目控制台`
- **AND** it SHALL present them in the console UI with timestamp, level and message


@@ -1,9 +1,11 @@
## ADDED Requirements
### Requirement: Per-Project Heartbeat Key
The system SHALL standardize a per-project Redis heartbeat key for connected projects.
#### Scenario: External project writes heartbeat
- **WHEN** an external project integrates with this console
- **THEN** it SHALL write a Redis STRING key named `${projectName}_项目心跳`
- **AND** the value SHALL be a JSON string containing `apiBaseUrl` and `lastActiveAt`


@@ -1,13 +1,16 @@
## 1. Documentation
- [x] 1.1 Add Redis integration protocol doc for external projects
- [ ] 1.2 Link doc location from README (optional)
## 2. Backend
- [x] 2.1 Add Redis client config + connection helper
- [x] 2.2 Implement command enqueue: write `${targetProjectName}_控制` LIST with JSON payload
- [x] 2.3 Implement log fetch/stream: read `${projectName}_项目控制台` LIST
## 3. Frontend
- [x] 3.1 Wire selected project name into Console (targetProjectName)
- [x] 3.2 Replace simulated command send with API call to backend
- [x] 3.3 Replace simulated logs with backend-provided logs (polling or SSE)


@@ -1,9 +1,11 @@
# Project Context
## Purpose
BLS Project Console is a Node.js project with separate frontend and backend. It reads log records from Redis queues and displays them in a console UI, and it sends console commands to Redis queues for other programs to read and execute.
## Tech Stack
- **Frontend**: Vue 3.x, Vue Router, Axios, CSS3
- **Backend**: Node.js, Express, Redis client, CORS
- **Build tool**: Vite
@@ -12,9 +14,10 @@ BLS Project Console是一个前后端分离的Node.js项目用于从Redis队
## Project Conventions
### Code Style
- **JavaScript**: use ES module syntax (import/export)
- **Vue**: use the Composition API
- **Naming conventions**:
  - File names: lower camel case (e.g., logView.vue)
  - Component names: upper camel case (e.g., LogView)
  - Variable names: lower camel case
@@ -23,20 +26,23 @@ BLS Project Console是一个前后端分离的Node.js项目用于从Redis队
- **Code quality**: use ESLint for static code checks
### Architecture Patterns
- **Frontend/backend separation**: deployed independently and communicating over a RESTful API
- **MVC architecture**: the backend follows the Model-View-Controller pattern
- **Component-based frontend**: the frontend is built from Vue components
- **Layered design**:
  - Frontend: view layer, router layer, service layer
  - Backend: route layer, service layer, data access layer
### Testing Strategy
- **Unit tests**: cover the core functional modules
- **Integration tests**: cover the API endpoints and Redis interaction
- **End-to-end tests**: cover complete user flows
- **Test frameworks**: Jest (backend), Vitest (frontend)
### Git Workflow
- **Branching strategy**: Git Flow
  - main: production branch
  - develop: development branch
@@ -52,12 +58,14 @@ BLS Project Console是一个前后端分离的Node.js项目用于从Redis队
- chore: build or dependency updates
## Domain Context
- **Redis queues**: message queues that store log records and console commands
- **Log records**: log entries written to the Redis queue by other programs, containing a timestamp, level, and message
- **Console commands**: commands sent from the console to the Redis queue for other programs to read and execute
- **Real-time updates**: the console must fetch new log records from the Redis queue in real time
## Important Constraints
- **Performance**: the console must handle real-time updates for large volumes of log records
- **Reliability**: the Redis connection needs a reconnection mechanism to keep the system stable
- **Security**: the API endpoints need appropriate access control
@@ -65,6 +73,7 @@ BLS Project Console是一个前后端分离的Node.js项目用于从Redis队
- **Development constraint**: whenever lint/build/tooling fails (e.g., a broken ESLint config), fixing it and restoring a working development workflow takes priority over further feature work
## External Dependencies
- **Redis**: message queue service storing log records and console commands
  - Version: 6.x+
  - Client: redis npm package (redis@^4.6.10)


@@ -1,11 +1,13 @@
# Command Capability Design
## Context
This design document describes the technical implementation of the command capability for the BLS Project Console, which allows users to send console commands to Redis queues for other programs to read and execute.
## Goals / Non-Goals
### Goals
- Implement command sending to Redis queues
- Provide command validation and error handling
- Maintain a history of sent commands
@@ -13,6 +15,7 @@ This design document describes the technical implementation of the command capab
- Ensure high performance and reliability
### Non-Goals
- Command execution or processing
- Complex command syntax highlighting
- Advanced command editing capabilities
@@ -20,29 +23,33 @@ This design document describes the technical implementation of the command capab
## Decisions
### Decision: Redis Queue Implementation
- **What**: Use Redis List as the queue data structure
- **Why**: Redis Lists provide efficient push/pop operations with O(1) time complexity, making them ideal for message queues
- **Alternatives considered**:
- Redis Streams: More advanced but overkill for our use case
- Redis Pub/Sub: No persistence, so commands would be lost if the receiving program is down
### Decision: Command History Storage
- **What**: Store command history in memory with a configurable maximum size
- **Why**: In-memory storage provides fast access times and avoids the complexity of database management
- **Alternatives considered**:
- Database storage: Adds complexity and latency
- File system: Not suitable for real-time access
### Decision: Command Validation
- **What**: Implement basic command validation on both frontend and backend
- **Why**: Frontend validation provides immediate feedback to users, while backend validation ensures data integrity
- **Alternatives considered**:
- Only frontend validation: Less secure, as users could bypass it
- Only backend validation: Less responsive, as users would have to wait for server response
## Architecture
### Frontend Architecture
```
CommandView Component
├── CommandForm Component
@@ -51,6 +58,7 @@ CommandView Component
```
### Backend Architecture
```
Command Routes
├── Command Service
@@ -62,11 +70,13 @@ Command Routes
## Implementation Details
### Redis Connection
- Use the `redis` npm package to connect to Redis
- Implement automatic reconnection with exponential backoff
- Handle connection errors gracefully
### Command Sending
1. User enters a command in the frontend form
2. Frontend validates the command (not empty, no invalid characters)
3. Frontend sends a POST request to `/api/commands` with the command content
@@ -77,6 +87,7 @@ Command Routes
8. Backend sends a success response to the frontend
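A minimal sketch of the backend enqueue described above, assuming the node `redis` v4 client; the helper name is illustrative, the payload fields follow the command data model, and the queue key follows the Redis integration protocol:
```js
import { randomUUID } from 'node:crypto';

// Producer side: append a command payload to the per-project control queue.
async function enqueueCommand(client, targetProjectName, content) {
  const payload = {
    id: randomUUID(),
    content,
    timestamp: new Date().toISOString(),
    status: 'sent',
  };
  await client.rPush(`${targetProjectName}_控制`, JSON.stringify(payload));
  return payload;
}
```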
### Command Validation
- **Frontend validation**:
- Check that the command is not empty
- Check that the command does not contain invalid characters (e.g., null bytes)
@@ -86,6 +97,7 @@ Command Routes
- Additional server-side validation if needed
### Command History
- Store command history in an array in memory
- Implement a circular buffer to limit memory usage
- Default maximum command count: 1000
@@ -93,6 +105,7 @@ Command Routes
- Include command ID, content, timestamp, and status
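A sketch of the bounded in-memory history (illustrative only; a true ring buffer with a write index would avoid the `shift` cost if it ever matters):
```js
const MAX_HISTORY = 1000; // default maximum command count

const history = [];

// Record a command; drop the oldest entry once the limit is exceeded.
function recordCommand(entry) {
  history.push(entry); // { id, content, timestamp, status }
  if (history.length > MAX_HISTORY) history.shift();
}
```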
### Command Response Handling
1. Receiving program reads the command from the Redis queue
2. Receiving program executes the command
3. Receiving program writes the response to a separate Redis queue
@@ -103,21 +116,26 @@ Command Routes
## Risks / Trade-offs
### Risk: Redis Connection Failure
- **Risk**: If Redis connection is lost, commands won't be sent
- **Mitigation**: Implement automatic reconnection with exponential backoff, and notify users when connection is lost
### Risk: Command Loss
- **Risk**: Commands could be lost if Redis goes down
- **Mitigation**: Implement Redis persistence (RDB or AOF) to ensure commands are not lost
### Risk: Command Response Timeout
- **Risk**: Commands could take too long to execute, causing the UI to hang
- **Mitigation**: Implement a timeout mechanism for command responses, and show a loading indicator to users
## Migration Plan
No migration is required as this is a new feature.
## Open Questions
- What is the expected maximum command frequency per minute?
- Should we add support for command templates or macros?
- Should we implement command scheduling for future execution?


@@ -1,44 +1,54 @@
# Command Capability Specification
## Overview
This specification defines the command capability for the BLS Project Console, which allows users to send console commands to Redis queues for other programs to read and execute.
## Requirements
### Requirement: Command Sending to Redis
The system SHALL send commands to a Redis queue.
#### Scenario: Sending a command to Redis queue
- **WHEN** the user enters a command in the console
- **AND** clicks the "Send" button
- **THEN** the command SHALL be sent to the Redis queue
- **AND** the user SHALL receive a success confirmation
### Requirement: Command Validation
The system SHALL validate commands before sending them to Redis.
#### Scenario: Validating an empty command
- **WHEN** the user tries to send an empty command
- **THEN** the system SHALL display an error message
- **AND** the command SHALL NOT be sent to Redis
#### Scenario: Validating a command with invalid characters
- **WHEN** the user tries to send a command with invalid characters
- **THEN** the system SHALL display an error message
- **AND** the command SHALL NOT be sent to Redis
### Requirement: Command History
The system SHALL maintain a history of sent commands.
#### Scenario: Viewing command history
- **WHEN** the user opens the command history
- **THEN** the system SHALL display a list of previously sent commands
- **AND** the user SHALL be able to select a command from the history to resend
### Requirement: Command Response Handling
The system SHALL handle responses from commands sent to Redis.
#### Scenario: Receiving a command response
- **WHEN** a command response is received from Redis
- **THEN** the system SHALL display the response in the console
- **AND** the response SHALL be associated with the original command
@@ -46,6 +56,7 @@ The system SHALL handle responses from commands sent to Redis.
## Data Model
### Command
```json
{
"id": "string",
@@ -56,6 +67,7 @@ The system SHALL handle responses from commands sent to Redis.
```
### Command Response
```json
{
"id": "string",
@@ -69,29 +81,80 @@ The system SHALL handle responses from commands sent to Redis.
## API Endpoints
### POST /api/commands
- **Description**: Send a command to a project's API endpoint
- **Request Body**:
```json
{
"content": "string" // the command to send
"targetProjectName": "string",
"command": "string"
}
```
- **Response**:
```json
{
"success": true,
"message": "Command sent successfully",
"commandId": "string"
"message": "已调用目标项目 API",
"commandId": "string",
"targetUrl": "string",
"upstreamStatus": 200,
"upstreamData": "object"
}
```
### GET /api/projects
- **Description**: Get list of all projects with their heartbeat status
- **Response**:
```json
{
"success": true,
"projects": [
{
"id": "string",
"name": "string",
"apiBaseUrl": "string",
"lastActiveAt": "number",
"status": "online|offline|unknown",
"isOnline": "boolean",
"ageMs": "number"
}
],
"count": 10
}
```
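As a usage illustration only (not part of the spec), a frontend store helper might consume this endpoint roughly like so; the function name and the online-only filtering are assumptions:
```js
// Load the project list and keep only projects that are currently online.
async function loadProjects() {
  const res = await fetch('/api/projects');
  const data = await res.json();
  if (!data.success) throw new Error('Failed to load projects');
  return data.projects.filter((p) => p.isOnline);
}
```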
### POST /api/projects/migrate
- **Description**: Migrate heartbeat data from old structure to new unified structure
- **Request Body**:
```json
{
"deleteOldKeys": false,
"dryRun": false
}
```
- **Response**:
```json
{
"success": true,
"message": "数据迁移完成",
"migrated": 5,
"projects": [...],
"listKey": "项目心跳",
"deleteOldKeys": false
}
```
### GET /api/commands/history
- **Description**: Get command history (deprecated - use project logs instead)
- **Query Parameters**:
- `limit`: Maximum number of commands to return (default: 50)
- `offset`: Offset for pagination (default: 0)
- **Response**: Array of command objects
### GET /api/commands/:id/response
- **Description**: Get response for a specific command (deprecated - use project logs instead)
- **Response**: Command response object


@@ -1,11 +1,13 @@
# Logging Capability Design
## Context
This design document describes the technical implementation of the logging capability for the BLS Project Console, which allows the system to read log records from Redis queues and display them in the console interface.
## Goals / Non-Goals
### Goals
- Implement real-time log reading from Redis queues
- Provide a user-friendly log display interface
- Support log filtering by level and time range
@@ -13,6 +15,7 @@ This design document describes the technical implementation of the logging capab
- Implement proper error handling and reconnection mechanisms
### Non-Goals
- Log storage or persistence beyond memory
- Log analysis or visualization (charts, graphs)
- Advanced log search capabilities
@@ -20,29 +23,33 @@ This design document describes the technical implementation of the logging capab
## Decisions
### Decision: Redis Queue Implementation
- **What**: Use Redis List as the queue data structure
- **Why**: Redis Lists provide efficient push/pop operations with O(1) time complexity, making them ideal for message queues
- **Alternatives considered**:
- Redis Streams: More advanced but overkill for our use case
- Redis Pub/Sub: No persistence, so logs would be lost if the server is down
### Decision: Real-time Updates
- **What**: Use Server-Sent Events (SSE) for real-time log updates
- **Why**: SSE is simpler than WebSockets for one-way communication, has better browser support, and is easier to implement
- **Alternatives considered**:
- WebSockets: More complex for one-way communication
- Polling: Higher latency and more resource-intensive
### Decision: Log Storage
- **What**: Store logs in memory with a configurable maximum size
- **Why**: In-memory storage provides fast access times and avoids the complexity of database management
- **Alternatives considered**:
- Database storage: Adds complexity and latency
- File system: Not suitable for real-time access
## Architecture
### Frontend Architecture
```
LogView Component
├── LogList Component
@@ -51,6 +58,7 @@ LogView Component
```
### Backend Architecture
```
Log Routes
├── Log Service
@@ -62,28 +70,33 @@ Log Routes
## Implementation Details
### Redis Connection
- Use the `redis` npm package to connect to Redis
- Implement automatic reconnection with exponential backoff
- Handle connection errors gracefully
### Log Reading
1. Server establishes connection to Redis
2. Server listens for new log records using `BLPOP` command (blocking pop)
3. When a log record is received, it's added to the in-memory log store
4. The log is then sent to all connected SSE clients
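A minimal sketch of that loop, assuming the node `redis` v4 client (ideally a dedicated connection for blocking commands) and leaving the SSE broadcast abstract:
```js
// Blocking-pop new log records from the project's list and forward them to SSE clients.
async function readLogsLoop(client, projectName, broadcast) {
  const key = `${projectName}_项目控制台`;
  for (;;) {
    const entry = await client.blPop(key, 0); // block until a record arrives
    if (!entry) continue;
    const record = JSON.parse(entry.element); // each entry is a JSON log record
    broadcast(record);                        // e.g., write to every connected SSE response
  }
}
```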
### Log Storage
- Use an array to store log records in memory
- Implement a circular buffer to limit memory usage
- Default maximum log count: 10,000
- Configurable via environment variable
### Log Display
- Use a scrollable list to display logs
- Implement virtual scrolling for large log sets to improve performance
- Color-code logs by level (INFO: gray, WARN: yellow, ERROR: red, DEBUG: blue)
### Log Filtering
- Implement client-side filtering for performance
- Allow filtering by log level (INFO, WARN, ERROR, DEBUG)
- Allow filtering by time range using a date picker
@@ -91,21 +104,26 @@ Log Routes
## Risks / Trade-offs
### Risk: Redis Connection Failure
- **Risk**: If Redis connection is lost, logs won't be received
- **Mitigation**: Implement automatic reconnection with exponential backoff, and notify users when connection is lost
### Risk: High Log Volume
- **Risk**: Large number of logs could cause performance issues
- **Mitigation**: Implement a circular buffer to limit memory usage, and use virtual scrolling in the frontend
### Risk: Browser Performance
- **Risk**: Displaying thousands of logs could slow down the browser
- **Mitigation**: Use virtual scrolling and limit the number of logs displayed at once
## Migration Plan
No migration is required as this is a new feature.
## Open Questions
- What is the expected maximum log volume per minute?
- Should we add support for log persistence to disk?
- Should we implement log search functionality?


@@ -1,43 +1,53 @@
# Logging Capability Specification
## Overview
This specification defines the logging capability for the BLS Project Console, which allows the system to read log records from Redis queues and display them in the console interface.
## Requirements
### Requirement: Log Reading from Redis
The system SHALL read log records from a Redis queue.
#### Scenario: Reading logs from Redis queue
- **WHEN** the server starts
- **THEN** it SHALL establish a connection to the Redis queue
- **AND** it SHALL begin listening for new log records
- **AND** it SHALL store log records in memory for display
### Requirement: Log Display in Console
The system SHALL display log records in a user-friendly format.
#### Scenario: Displaying logs in the console
- **WHEN** a log record is received from Redis
- **THEN** it SHALL be added to the log list in the console
- **AND** it SHALL display the log timestamp, level, and message
- **AND** it SHALL support scrolling through historical logs
### Requirement: Log Filtering
The system SHALL allow users to filter logs by level and time range.
#### Scenario: Filtering logs by level
- **WHEN** the user selects a log level filter
- **THEN** only logs with the selected level SHALL be displayed
#### Scenario: Filtering logs by time range
- **WHEN** the user selects a time range
- **THEN** only logs within the specified range SHALL be displayed
### Requirement: Log Auto-Refresh
The system SHALL automatically refresh logs in real-time.
#### Scenario: Real-time log updates
- **WHEN** a new log is added to the Redis queue
- **THEN** it SHALL be automatically displayed in the console
- **AND** the console SHALL scroll to the latest log if the user is viewing the end
@@ -45,6 +55,7 @@ The system SHALL automatically refresh logs in real-time.
## Data Model
### Log Record
```json
{
"id": "string",
@@ -58,14 +69,34 @@ The system SHALL automatically refresh logs in real-time.
## API Endpoints
### GET /api/logs
- **Description**: Get log records for a specific project
- **Query Parameters**:
  - `projectName`: Project name (required)
  - `level`: Filter logs by level
  - `startTime`: Filter logs from this timestamp
  - `endTime`: Filter logs up to this timestamp
  - `limit`: Maximum number of logs to return (default: 200)
- **Response**:
```json
{
"logs": [
{
"id": "string",
"timestamp": "ISO-8601 string",
"level": "string",
"message": "string",
"metadata": "object"
}
],
"projectStatus": "在线|离线|null",
"heartbeat": {
"apiBaseUrl": "string",
"lastActiveAt": "number",
"isOnline": "boolean",
"ageMs": "number"
}
}
```
### GET /api/logs/live
- **Description**: Establish a WebSocket connection for real-time log updates
- **Response**: Continuous stream of log records


@@ -1,11 +1,13 @@
# Redis Connection Capability Design
## Context
This design document describes the technical implementation of the Redis connection capability for the BLS Project Console, which manages the connection between the system and the Redis server for reading logs and sending commands.
## Goals / Non-Goals
### Goals
- Establish and manage connection to Redis server
- Provide configuration options for Redis connection
- Implement automatic reconnection mechanism
@@ -13,6 +15,7 @@ This design document describes the technical implementation of the Redis connect
- Monitor and report connection status
### Non-Goals
- Redis server administration
- Redis cluster management
- Advanced Redis features (e.g., pub/sub, streams) beyond basic queue operations
@@ -20,29 +23,33 @@ This design document describes the technical implementation of the Redis connect
## Decisions
### Decision: Redis Client Library
- **What**: Use the official `redis` npm package
- **Why**: It's the official Redis client for Node.js, well-maintained, and supports all Redis commands
- **Alternatives considered**:
- `ioredis`: More features but more complex
- `node-redis`: Older, less maintained
### Decision: Connection Configuration
- **What**: Use environment variables for Redis connection configuration
- **Why**: Environment variables are a standard way to configure services in containerized environments, and they allow easy configuration without code changes
- **Alternatives considered**:
- Configuration files: Less flexible for containerized environments
- Hardcoded values: Not suitable for production use
### Decision: Reconnection Strategy
- **What**: Use exponential backoff for reconnection attempts
- **Why**: Exponential backoff prevents overwhelming the Redis server with reconnection attempts, while still ensuring timely reconnection
- **Alternatives considered**:
- Fixed interval reconnection: Less efficient, could overwhelm the server
- No reconnection: Not suitable for production use
## Architecture
### Redis Connection Architecture
```
Redis Connection Manager
├── Redis Client
@@ -54,25 +61,28 @@ Redis Connection Manager
## Implementation Details
### Redis Client Initialization
1. Server reads Redis configuration from environment variables
2. Server creates a Redis client instance with the configuration
3. Server attaches event listeners for connection events (connect, error, end, reconnecting)
4. Server attempts to connect to Redis
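A minimal sketch of steps 1-4, assuming the node `redis` v4 client and the environment variables listed in the table below:
```js
import { createClient } from 'redis';

// 1-2. Build the client from environment variables (exponential backoff capped at 30s).
const client = createClient({
  socket: {
    host: process.env.REDIS_HOST || 'localhost',
    port: Number(process.env.REDIS_PORT || 6379),
    connectTimeout: Number(process.env.REDIS_CONNECT_TIMEOUT || 10000),
    reconnectStrategy: (retries) =>
      Math.min(1000 * 2 ** retries, Number(process.env.REDIS_MAX_RECONNECT_INTERVAL || 30000)),
  },
  password: process.env.REDIS_PASSWORD || undefined,
  database: Number(process.env.REDIS_DB || 0),
});

// 3. Attach listeners for connection events.
client.on('connect', () => console.log('redis: connecting'));
client.on('ready', () => console.log('redis: connected'));
client.on('error', (err) => console.error('redis: error', err.message));
client.on('reconnecting', () => console.log('redis: reconnecting'));

// 4. Connect.
await client.connect();
```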
### Configuration Parameters
| Parameter | Default Value | Environment Variable | Description |
| -------------------- | ------------- | ---------------------------- | --------------------------------------------- |
| host | localhost | REDIS_HOST | Redis server hostname |
| port | 6379 | REDIS_PORT | Redis server port |
| password | null | REDIS_PASSWORD | Redis server password |
| db | 0 | REDIS_DB | Redis database index |
| connectTimeout | 10000 | REDIS_CONNECT_TIMEOUT | Connection timeout in milliseconds |
| maxRetriesPerRequest | 3 | REDIS_MAX_RETRIES | Maximum retries per request |
| reconnectStrategy | exponential | REDIS_RECONNECT_STRATEGY | Reconnection strategy (exponential, fixed) |
| reconnectInterval | 1000 | REDIS_RECONNECT_INTERVAL | Base reconnection interval in milliseconds |
| maxReconnectInterval | 30000 | REDIS_MAX_RECONNECT_INTERVAL | Maximum reconnection interval in milliseconds |
### Reconnection Implementation
- Use the built-in reconnection feature of the `redis` package
- Configure exponential backoff with:
- Initial delay: 1 second
@@ -81,11 +91,13 @@ Redis Connection Manager
- Log each reconnection attempt with timestamp and delay
### Error Handling
- **Connection errors**: Log the error, update connection status, and attempt to reconnect
- **Command errors**: Log the error, update the command status, and notify the user
- **Timeout errors**: Log the error, update connection status, and attempt to reconnect
### Connection Monitoring
- Track connection status (connecting, connected, disconnected, error)
- Log status changes with timestamps
- Expose connection status via API endpoint
@@ -94,21 +106,26 @@ Redis Connection Manager
## Risks / Trade-offs
### Risk: Redis Server Unavailability
- **Risk**: If the Redis server is unavailable for an extended period, the system won't be able to read logs or send commands
- **Mitigation**: Implement proper error handling and reconnection logic, and notify users of the issue
### Risk: Misconfiguration of Redis Connection
- **Risk**: Incorrect Redis configuration could lead to connection failures
- **Mitigation**: Validate configuration on startup, log configuration values (excluding passwords), and provide clear error messages
### Risk: Performance Impact of Reconnection Attempts
- **Risk**: Frequent reconnection attempts could impact system performance
- **Mitigation**: Use exponential backoff to reduce the frequency of reconnection attempts, and limit the maximum reconnection delay
## Migration Plan
No migration is required as this is a new feature.
## Open Questions
- What is the expected Redis server availability?
- Should we implement connection pooling for better performance?
- Should we support Redis Sentinel or Cluster for high availability?


@@ -1,48 +1,59 @@
# Redis Connection Capability Specification
## Overview
This specification defines the Redis connection capability for the BLS Project Console, which manages the connection between the system and the Redis server for reading logs and sending commands.
## Requirements
### Requirement: Redis Connection Establishment
The system SHALL establish a connection to the Redis server.
#### Scenario: Establishing Redis connection on server start
- **WHEN** the server starts
- **THEN** it SHALL attempt to connect to the Redis server
- **AND** it SHALL log the connection status
### Requirement: Redis Connection Configuration
The system SHALL allow configuration of Redis connection parameters.
#### Scenario: Configuring Redis connection via environment variables
- **WHEN** the server starts with Redis environment variables set
- **THEN** it SHALL use those variables to configure the Redis connection
- **AND** it SHALL override default values
### Requirement: Redis Connection Reconnection
The system SHALL automatically reconnect to Redis if the connection is lost.
#### Scenario: Reconnecting to Redis after connection loss
- **WHEN** the Redis connection is lost
- **THEN** the system SHALL attempt to reconnect with exponential backoff
- **AND** it SHALL log each reconnection attempt
- **AND** it SHALL notify the user when connection is restored
### Requirement: Redis Connection Error Handling
The system SHALL handle Redis connection errors gracefully.
#### Scenario: Handling Redis connection failure
- **WHEN** the system fails to connect to Redis
- **THEN** it SHALL log the error
- **AND** it SHALL display an error message to the user
- **AND** it SHALL continue attempting to reconnect
### Requirement: Redis Connection Monitoring
The system SHALL monitor the Redis connection status.
#### Scenario: Monitoring Redis connection status
- **WHEN** the Redis connection status changes
- **THEN** the system SHALL update the connection status in the UI
- **AND** it SHALL log the status change
@@ -50,6 +61,7 @@ The system SHALL monitor the Redis connection status.
## Data Model
### Redis Connection Configuration
```json
{
"host": "string",
@@ -65,6 +77,7 @@ The system SHALL monitor the Redis connection status.
```
### Redis Connection Status
```json
{
"status": "string", // e.g., connecting, connected, disconnected, error
@@ -77,6 +90,7 @@ The system SHALL monitor the Redis connection status.
## API Endpoints
### GET /api/redis/status
- **Description**: Get Redis connection status
- **Response**:
```json
@@ -84,11 +98,12 @@ The system SHALL monitor the Redis connection status.
"status": "string",
"lastConnectedAt": "ISO-8601 string",
"lastDisconnectedAt": "ISO-8601 string",
"error": "string" // optional
"error": "string"
}
```
### POST /api/redis/reconnect
- **Description**: Manually reconnect to Redis
- **Response**:
```json
@@ -97,3 +112,47 @@ The system SHALL monitor the Redis connection status.
"message": "Reconnection attempt initiated"
}
```
### GET /api/projects
- **Description**: Get list of all projects from Redis
- **Response**:
```json
{
"success": true,
"projects": [
{
"id": "string",
"name": "string",
"apiBaseUrl": "string",
"lastActiveAt": "number",
"status": "online|offline|unknown",
"isOnline": "boolean",
"ageMs": "number"
}
],
"count": 10
}
```
### POST /api/projects/migrate
- **Description**: Migrate heartbeat data from old structure to new unified structure
- **Request Body**:
```json
{
"deleteOldKeys": false,
"dryRun": false
}
```
- **Response**:
```json
{
"success": true,
"message": "数据迁移完成",
"migrated": 5,
"projects": [...],
"listKey": "项目心跳",
"deleteOldKeys": false
}
```