feat: initialize base architecture for the front-end/back-end Node.js console project

- Create core project files: package.json, vite.config.js, .gitignore
- Add base front-end and back-end dependencies and dev tool configuration
- Flesh out the OpenSpec module, including project docs and core capability specs
- Configure ESLint and Prettier code style rules
- Create the basic directory structure
- Implement the front-end Vue 3 application skeleton and routing
- Add the back-end Express server and basic routes
- Write the README project documentation
2026-01-08 11:46:34 +08:00
commit 5f0fa79606
29 changed files with 6181 additions and 0 deletions

openspec/AGENTS.md

@@ -0,0 +1,456 @@
# OpenSpec Instructions
Instructions for AI coding assistants using OpenSpec for spec-driven development.
## TL;DR Quick Checklist
- Search existing work: `openspec spec list --long`, `openspec list` (use `rg` only for full-text search)
- Decide scope: new capability vs modify existing capability
- Pick a unique `change-id`: kebab-case, verb-led (`add-`, `update-`, `remove-`, `refactor-`)
- Scaffold: `proposal.md`, `tasks.md`, `design.md` (only if needed), and delta specs per affected capability
- Write deltas: use `## ADDED|MODIFIED|REMOVED|RENAMED Requirements`; include at least one `#### Scenario:` per requirement
- Validate: `openspec validate [change-id] --strict` and fix issues
- Request approval: Do not start implementation until proposal is approved
## Three-Stage Workflow
### Stage 1: Creating Changes
Create proposal when you need to:
- Add features or functionality
- Make breaking changes (API, schema)
- Change architecture or patterns
- Optimize performance (changes behavior)
- Update security patterns
Triggers (examples):
- "Help me create a change proposal"
- "Help me plan a change"
- "Help me create a proposal"
- "I want to create a spec proposal"
- "I want to create a spec"
Loose matching guidance:
- Contains one of: `proposal`, `change`, `spec`
- With one of: `create`, `plan`, `make`, `start`, `help`
Skip proposal for:
- Bug fixes (restore intended behavior)
- Typos, formatting, comments
- Dependency updates (non-breaking)
- Configuration changes
- Tests for existing behavior
**Workflow**
1. Review `openspec/project.md`, `openspec list`, and `openspec list --specs` to understand current context.
2. Choose a unique verb-led `change-id` and scaffold `proposal.md`, `tasks.md`, optional `design.md`, and spec deltas under `openspec/changes/<id>/`.
3. Draft spec deltas using `## ADDED|MODIFIED|REMOVED Requirements` with at least one `#### Scenario:` per requirement.
4. Run `openspec validate <id> --strict` and resolve any issues before sharing the proposal.
### Stage 2: Implementing Changes
Track these steps as TODOs and complete them one by one.
1. **Read proposal.md** - Understand what's being built
2. **Read design.md** (if exists) - Review technical decisions
3. **Read tasks.md** - Get implementation checklist
4. **Implement tasks sequentially** - Complete in order
5. **Confirm completion** - Ensure every item in `tasks.md` is finished before updating statuses
6. **Update checklist** - After all work is done, set every task to `- [x]` so the list reflects reality
7. **Approval gate** - Do not start implementation until the proposal is reviewed and approved
### Stage 3: Archiving Changes
After deployment, create separate PR to:
- Move `changes/[name]/` → `changes/archive/YYYY-MM-DD-[name]/`
- Update `specs/` if capabilities changed
- Use `openspec archive <change-id> --skip-specs --yes` for tooling-only changes (always pass the change ID explicitly)
- Run `openspec validate --strict` to confirm the archived change passes checks
## Before Any Task
**Context Checklist:**
- [ ] Read relevant specs in `specs/[capability]/spec.md`
- [ ] Check pending changes in `changes/` for conflicts
- [ ] Read `openspec/project.md` for conventions
- [ ] Run `openspec list` to see active changes
- [ ] Run `openspec list --specs` to see existing capabilities
**Before Creating Specs:**
- Always check if capability already exists
- Prefer modifying existing specs over creating duplicates
- Use `openspec show [spec]` to review current state
- If request is ambiguous, ask 1–2 clarifying questions before scaffolding
### Search Guidance
- Enumerate specs: `openspec spec list --long` (or `--json` for scripts)
- Enumerate changes: `openspec list` (or `openspec change list --json` - deprecated but available)
- Show details:
- Spec: `openspec show <spec-id> --type spec` (use `--json` for filters)
- Change: `openspec show <change-id> --json --deltas-only`
- Full-text search (use ripgrep): `rg -n "Requirement:|Scenario:" openspec/specs`
## Quick Start
### CLI Commands
```bash
# Essential commands
openspec list # List active changes
openspec list --specs # List specifications
openspec show [item] # Display change or spec
openspec validate [item] # Validate changes or specs
openspec archive <change-id> [--yes|-y] # Archive after deployment (add --yes for non-interactive runs)
# Project management
openspec init [path] # Initialize OpenSpec
openspec update [path] # Update instruction files
# Interactive mode
openspec show # Prompts for selection
openspec validate # Bulk validation mode
# Debugging
openspec show [change] --json --deltas-only
openspec validate [change] --strict
```
### Command Flags
- `--json` - Machine-readable output
- `--type change|spec` - Disambiguate items
- `--strict` - Comprehensive validation
- `--no-interactive` - Disable prompts
- `--skip-specs` - Archive without spec updates
- `--yes`/`-y` - Skip confirmation prompts (non-interactive archive)
## Directory Structure
```
openspec/
├── project.md              # Project conventions
├── specs/                  # Current truth - what IS built
│   └── [capability]/       # Single focused capability
│       ├── spec.md         # Requirements and scenarios
│       └── design.md       # Technical patterns
├── changes/                # Proposals - what SHOULD change
│   ├── [change-name]/
│   │   ├── proposal.md     # Why, what, impact
│   │   ├── tasks.md        # Implementation checklist
│   │   ├── design.md       # Technical decisions (optional; see criteria)
│   │   └── specs/          # Delta changes
│   │       └── [capability]/
│   │           └── spec.md # ADDED/MODIFIED/REMOVED
│   └── archive/            # Completed changes
```
## Creating Change Proposals
### Decision Tree
```
New request?
├─ Bug fix restoring spec behavior? → Fix directly
├─ Typo/format/comment? → Fix directly
├─ New feature/capability? → Create proposal
├─ Breaking change? → Create proposal
├─ Architecture change? → Create proposal
└─ Unclear? → Create proposal (safer)
```
### Proposal Structure
1. **Create directory:** `changes/[change-id]/` (kebab-case, verb-led, unique)
2. **Write proposal.md:**
```markdown
# Change: [Brief description of change]
## Why
[1-2 sentences on problem/opportunity]
## What Changes
- [Bullet list of changes]
- [Mark breaking changes with **BREAKING**]
## Impact
- Affected specs: [list capabilities]
- Affected code: [key files/systems]
```
3. **Create spec deltas:** `specs/[capability]/spec.md`
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL provide...
#### Scenario: Success case
- **WHEN** user performs action
- **THEN** expected result
## MODIFIED Requirements
### Requirement: Existing Feature
[Complete modified requirement]
## REMOVED Requirements
### Requirement: Old Feature
**Reason**: [Why removing]
**Migration**: [How to handle]
```
If multiple capabilities are affected, create multiple delta files under `changes/[change-id]/specs/<capability>/spec.md`—one per capability.
4. **Create tasks.md:**
```markdown
## 1. Implementation
- [ ] 1.1 Create database schema
- [ ] 1.2 Implement API endpoint
- [ ] 1.3 Add frontend component
- [ ] 1.4 Write tests
```
5. **Create design.md when needed:**
Create `design.md` if any of the following apply; otherwise omit it:
- Cross-cutting change (multiple services/modules) or a new architectural pattern
- New external dependency or significant data model changes
- Security, performance, or migration complexity
- Ambiguity that benefits from technical decisions before coding
Minimal `design.md` skeleton:
```markdown
## Context
[Background, constraints, stakeholders]
## Goals / Non-Goals
- Goals: [...]
- Non-Goals: [...]
## Decisions
- Decision: [What and why]
- Alternatives considered: [Options + rationale]
## Risks / Trade-offs
- [Risk] → Mitigation
## Migration Plan
[Steps, rollback]
## Open Questions
- [...]
```
## Spec File Format
### Critical: Scenario Formatting
**CORRECT** (use #### headers):
```markdown
#### Scenario: User login success
- **WHEN** valid credentials provided
- **THEN** return JWT token
```
**WRONG** (don't use bullets or bold):
```markdown
- **Scenario: User login** ❌
**Scenario**: User login ❌
### Scenario: User login ❌
```
Every requirement MUST have at least one scenario.
### Requirement Wording
- Use SHALL/MUST for normative requirements (avoid should/may unless intentionally non-normative)
### Delta Operations
- `## ADDED Requirements` - New capabilities
- `## MODIFIED Requirements` - Changed behavior
- `## REMOVED Requirements` - Deprecated features
- `## RENAMED Requirements` - Name changes
Headers matched with `trim(header)` - whitespace ignored.
#### When to use ADDED vs MODIFIED
- ADDED: Introduces a new capability or sub-capability that can stand alone as a requirement. Prefer ADDED when the change is orthogonal (e.g., adding "Slash Command Configuration") rather than altering the semantics of an existing requirement.
- MODIFIED: Changes the behavior, scope, or acceptance criteria of an existing requirement. Always paste the full, updated requirement content (header + all scenarios). The archiver will replace the entire requirement with what you provide here; partial deltas will drop previous details.
- RENAMED: Use when only the name changes. If you also change behavior, use RENAMED (name) plus MODIFIED (content) referencing the new name.
Common pitfall: Using MODIFIED to add a new concern without including the previous text. This causes loss of detail at archive time. If you aren't explicitly changing the existing requirement, add a new requirement under ADDED instead.
Authoring a MODIFIED requirement correctly:
1) Locate the existing requirement in `openspec/specs/<capability>/spec.md`.
2) Copy the entire requirement block (from `### Requirement: ...` through its scenarios).
3) Paste it under `## MODIFIED Requirements` and edit to reflect the new behavior.
4) Ensure the header text matches exactly (whitespace-insensitive) and keep at least one `#### Scenario:`.
Example for RENAMED:
```markdown
## RENAMED Requirements
- FROM: `### Requirement: Login`
- TO: `### Requirement: User Authentication`
```
## Troubleshooting
### Common Errors
**"Change must have at least one delta"**
- Check `changes/[name]/specs/` exists with .md files
- Verify files have operation prefixes (## ADDED Requirements)
**"Requirement must have at least one scenario"**
- Check scenarios use `#### Scenario:` format (4 hashtags)
- Don't use bullet points or bold for scenario headers
**Silent scenario parsing failures**
- Exact format required: `#### Scenario: Name`
- Debug with: `openspec show [change] --json --deltas-only`
### Validation Tips
```bash
# Always use strict mode for comprehensive checks
openspec validate [change] --strict
# Debug delta parsing
openspec show [change] --json | jq '.deltas'
# Check specific requirement
openspec show [spec] --json -r 1
```
## Happy Path Script
```bash
# 1) Explore current state
openspec spec list --long
openspec list
# Optional full-text search:
# rg -n "Requirement:|Scenario:" openspec/specs
# rg -n "^#|Requirement:" openspec/changes
# 2) Choose change id and scaffold
CHANGE=add-two-factor-auth
mkdir -p openspec/changes/$CHANGE/specs/auth
printf "## Why\n...\n\n## What Changes\n- ...\n\n## Impact\n- ...\n" > openspec/changes/$CHANGE/proposal.md
printf "## 1. Implementation\n- [ ] 1.1 ...\n" > openspec/changes/$CHANGE/tasks.md
# 3) Add deltas (example)
cat > openspec/changes/$CHANGE/specs/auth/spec.md << 'EOF'
## ADDED Requirements
### Requirement: Two-Factor Authentication
Users MUST provide a second factor during login.
#### Scenario: OTP required
- **WHEN** valid credentials are provided
- **THEN** an OTP challenge is required
EOF
# 4) Validate
openspec validate $CHANGE --strict
```
## Multi-Capability Example
```
openspec/changes/add-2fa-notify/
├── proposal.md
├── tasks.md
└── specs/
    ├── auth/
    │   └── spec.md           # ADDED: Two-Factor Authentication
    └── notifications/
        └── spec.md           # ADDED: OTP email notification
```
auth/spec.md
```markdown
## ADDED Requirements
### Requirement: Two-Factor Authentication
...
```
notifications/spec.md
```markdown
## ADDED Requirements
### Requirement: OTP Email Notification
...
```
## Best Practices
### Simplicity First
- Default to <100 lines of new code
- Single-file implementations until proven insufficient
- Avoid frameworks without clear justification
- Choose boring, proven patterns
### Complexity Triggers
Only add complexity with:
- Performance data showing current solution too slow
- Concrete scale requirements (>1000 users, >100MB data)
- Multiple proven use cases requiring abstraction
### Clear References
- Use `file.ts:42` format for code locations
- Reference specs as `specs/auth/spec.md`
- Link related changes and PRs
### Capability Naming
- Use verb-noun: `user-auth`, `payment-capture`
- Single purpose per capability
- 10-minute understandability rule
- Split if description needs "AND"
### Change ID Naming
- Use kebab-case, short and descriptive: `add-two-factor-auth`
- Prefer verb-led prefixes: `add-`, `update-`, `remove-`, `refactor-`
- Ensure uniqueness; if taken, append `-2`, `-3`, etc.
## Tool Selection Guide
| Task | Tool | Why |
|------|------|-----|
| Find files by pattern | Glob | Fast pattern matching |
| Search code content | Grep | Optimized regex search |
| Read specific files | Read | Direct file access |
| Explore unknown scope | Task | Multi-step investigation |
## Error Recovery
### Change Conflicts
1. Run `openspec list` to see active changes
2. Check for overlapping specs
3. Coordinate with change owners
4. Consider combining proposals
### Validation Failures
1. Run with `--strict` flag
2. Check JSON output for details
3. Verify spec file format
4. Ensure scenarios properly formatted
### Missing Context
1. Read project.md first
2. Check related specs
3. Review recent archives
4. Ask for clarification
## Quick Reference
### Stage Indicators
- `changes/` - Proposed, not yet built
- `specs/` - Built and deployed
- `archive/` - Completed changes
### File Purposes
- `proposal.md` - Why and what
- `tasks.md` - Implementation steps
- `design.md` - Technical decisions
- `spec.md` - Requirements and behavior
### CLI Essentials
```bash
openspec list # What's in progress?
openspec show [item] # View details
openspec validate --strict # Is it correct?
openspec archive <change-id> [--yes|-y] # Mark complete (add --yes for automation)
```
Remember: Specs are truth. Changes are proposals. Keep them in sync.

openspec/project.md

@@ -0,0 +1,70 @@
# Project Context
## Purpose
BLS Project Console is a Node.js project with a separate front end and back end. It reads log records from a Redis queue and displays them in a console UI, and it can send console commands to a Redis queue for other programs to read and execute.
## Tech Stack
- **Frontend**: Vue 3.x, Vue Router, Axios, CSS3
- **Backend**: Node.js, Express, Redis client, CORS
- **Build tool**: Vite
- **Dev tooling**: ESLint, Prettier, nodemon
## Project Conventions
### Code Style
- **JavaScript**: ES module syntax (import/export)
- **Vue**: Composition API
- **Naming conventions**:
  - File names: camelCase (e.g. logView.vue)
  - Component names: PascalCase (e.g. LogView)
  - Variable names: camelCase
  - Constant names: UPPER_SNAKE_CASE (e.g. REDIS_QUEUE_NAME)
- **Formatting**: Prettier auto-formatting
- **Code quality**: ESLint static analysis
### Architecture Patterns
- **Separated frontend/backend**: deployed independently and communicating over a RESTful API
- **MVC architecture**: the backend follows the Model-View-Controller pattern
- **Component-based development**: the frontend is built from Vue components
- **Layered design**:
  - Frontend: view layer, routing layer, service layer
  - Backend: routing layer, service layer, data-access layer
### Testing Strategy
- **Unit tests**: cover core feature modules
- **Integration tests**: cover the API endpoints and Redis interaction
- **End-to-end tests**: cover complete user flows
- **Test frameworks**: Jest (backend), Vitest (frontend)
### Git Workflow
- **Branching strategy**: Git Flow
  - main: production branch
  - develop: development branch
  - feature/: feature branches
  - hotfix/: hotfix branches
- **Commit convention**: Conventional Commits
  - feat: new feature
  - fix: bug fix
  - docs: documentation change
  - style: code formatting change
  - refactor: code refactoring
  - test: test-related change
  - chore: build or dependency update
## Domain Context
- **Redis queue**: message queue that stores log records and console commands
- **Log record**: log entry written to the Redis queue by other programs, containing a timestamp, log level, and message content
- **Console command**: command sent from the console to the Redis queue for other programs to read and execute
- **Real-time updates**: the console must fetch new log records from the Redis queue in real time
## Important Constraints
- **Performance**: the console must handle real-time updates for a large volume of log records
- **Reliability**: the Redis connection needs a reconnection mechanism to keep the system stable
- **Security**: the API endpoints need appropriate access control
- **Extensibility**: the design should allow future feature growth
## External Dependencies
- **Redis**: message queue service that stores log records and console commands
  - Version: 6.x+
  - Connection: Redis client (redis@^4.6.10)
  - Primary uses: log queue and command queue


@@ -0,0 +1,123 @@
# Command Capability Design
## Context
This design document describes the technical implementation of the command capability for the BLS Project Console, which allows users to send console commands to Redis queues for other programs to read and execute.
## Goals / Non-Goals
### Goals
- Implement command sending to Redis queues
- Provide command validation and error handling
- Maintain a history of sent commands
- Handle command responses from Redis
- Ensure high performance and reliability
### Non-Goals
- Command execution or processing
- Complex command syntax highlighting
- Advanced command editing capabilities
## Decisions
### Decision: Redis Queue Implementation
- **What**: Use Redis List as the queue data structure
- **Why**: Redis Lists provide efficient push/pop operations with O(1) time complexity, making them ideal for message queues
- **Alternatives considered**:
- Redis Streams: More advanced but overkill for our use case
- Redis Pub/Sub: No persistence, so commands would be lost if the receiving program is down
### Decision: Command History Storage
- **What**: Store command history in memory with a configurable maximum size
- **Why**: In-memory storage provides fast access times and avoids the complexity of database management
- **Alternatives considered**:
- Database storage: Adds complexity and latency
- File system: Not suitable for real-time access
### Decision: Command Validation
- **What**: Implement basic command validation on both frontend and backend
- **Why**: Frontend validation provides immediate feedback to users, while backend validation ensures data integrity
- **Alternatives considered**:
- Only frontend validation: Less secure, as users could bypass it
- Only backend validation: Less responsive, as users would have to wait for server response
## Architecture
### Frontend Architecture
```
CommandView Component
├── CommandForm Component
├── CommandHistory Component
└── CommandService
```
### Backend Architecture
```
Command Routes
├── Command Service
│   ├── Redis Client
│   └── Command Manager
└── Command History Manager
```
## Implementation Details
### Redis Connection
- Use the `redis` npm package to connect to Redis
- Implement automatic reconnection with exponential backoff
- Handle connection errors gracefully
### Command Sending
1. User enters a command in the frontend form
2. Frontend validates the command (not empty, no invalid characters)
3. Frontend sends a POST request to `/api/commands` with the command content
4. Backend validates the command again
5. Backend generates a unique command ID
6. Backend adds the command to the Redis queue using `LPUSH`
7. Backend stores the command in the in-memory command history
8. Backend sends a success response to the frontend
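A minimal sketch of steps 3-8 above as an Express handler, assuming the `redis` v4 client; the queue name, module paths, and the history helper are illustrative placeholders rather than the project's actual names:
```js
// commandRoutes.js — sketch of steps 3-8 (queue name, paths, and helpers are illustrative)
import { Router } from 'express';
import { randomUUID } from 'node:crypto';
import { redisClient } from '../services/redisClient.js';       // assumed shared connection
import { commandHistory } from '../services/commandHistory.js'; // assumed in-memory history

const COMMAND_QUEUE = process.env.COMMAND_QUEUE ?? 'bls:commands';
export const commandRouter = Router();

commandRouter.post('/api/commands', async (req, res) => {
  const content = (req.body?.content ?? '').trim();

  // Step 4: backend validation mirrors the frontend checks
  if (!content || content.length > 1024 || content.includes('\0')) {
    return res.status(400).json({ success: false, message: 'Invalid command' });
  }

  // Step 5: unique command ID
  const command = { id: randomUUID(), content, timestamp: new Date().toISOString(), status: 'sent' };

  await redisClient.lPush(COMMAND_QUEUE, JSON.stringify(command)); // step 6: enqueue for consumers
  commandHistory.add(command);                                     // step 7: keep in-memory history

  // Step 8: success response
  res.json({ success: true, message: 'Command sent successfully', commandId: command.id });
});
```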
### Command Validation
- **Frontend validation**:
- Check that the command is not empty
- Check that the command does not contain invalid characters (e.g., null bytes)
- Limit command length to 1024 characters
- **Backend validation**:
- Same checks as frontend
- Additional server-side validation if needed
### Command History
- Store command history in an array in memory
- Implement a circular buffer to limit memory usage
- Default maximum command count: 1000
- Configurable via environment variable
- Include command ID, content, timestamp, and status
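A sketch of the bounded in-memory history; the environment variable name is an assumption, and for brevity this uses an array with `shift()` rather than a strict index-wrapping circular buffer:
```js
// commandHistory.js — bounded in-memory history sketch (variable name is illustrative)
const MAX_HISTORY = Number(process.env.MAX_COMMAND_HISTORY ?? 1000);

class CommandHistory {
  #items = [];

  add(command) {
    this.#items.push(command);
    if (this.#items.length > MAX_HISTORY) this.#items.shift(); // drop the oldest entry
  }

  updateStatus(commandId, status) {
    const entry = this.#items.find((c) => c.id === commandId);
    if (entry) entry.status = status;
  }

  list(limit = 50, offset = 0) {
    // Newest first, paginated for GET /api/commands/history
    return [...this.#items].reverse().slice(offset, offset + limit);
  }
}

export const commandHistory = new CommandHistory();
```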
### Command Response Handling
1. Receiving program reads the command from the Redis queue
2. Receiving program executes the command
3. Receiving program writes the response to a separate Redis queue
4. Backend listens for responses on the response queue using `BLPOP`
5. When a response is received, backend updates the command status in the history
6. Backend notifies the frontend of the response via Server-Sent Events (SSE)
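A sketch of the backend side of steps 4-6 above, assuming a dedicated blocking connection; the response queue name and the SSE broadcast helper are stand-ins for however the project tracks connected clients:
```js
// commandResponseListener.js — sketch of steps 4-6 (queue name and broadcast helper are illustrative)
import { redisClient } from './redisClient.js';
import { commandHistory } from './commandHistory.js';
import { broadcastSse } from './sse.js'; // hypothetical helper that writes to connected SSE clients

const RESPONSE_QUEUE = process.env.COMMAND_RESPONSE_QUEUE ?? 'bls:command-responses';

export async function listenForResponses() {
  // Blocking commands tie up a connection, so use a dedicated duplicate client
  const blocking = redisClient.duplicate();
  await blocking.connect();

  for (;;) {
    const reply = await blocking.blPop(RESPONSE_QUEUE, 0); // step 4: block until a response arrives
    if (!reply) continue;

    const response = JSON.parse(reply.element);
    commandHistory.updateStatus(response.commandId, response.status); // step 5: update history
    broadcastSse('command-response', response);                       // step 6: notify the frontend
  }
}
```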
## Risks / Trade-offs
### Risk: Redis Connection Failure
- **Risk**: If Redis connection is lost, commands won't be sent
- **Mitigation**: Implement automatic reconnection with exponential backoff, and notify users when connection is lost
### Risk: Command Loss
- **Risk**: Commands could be lost if Redis goes down
- **Mitigation**: Implement Redis persistence (RDB or AOF) to ensure commands are not lost
### Risk: Command Response Timeout
- **Risk**: Commands could take too long to execute, causing the UI to hang
- **Mitigation**: Implement a timeout mechanism for command responses, and show a loading indicator to users
## Migration Plan
No migration is required as this is a new feature.
## Open Questions
- What is the expected maximum command frequency per minute?
- Should we add support for command templates or macros?
- Should we implement command scheduling for future execution?


@@ -0,0 +1,97 @@
# Command Capability Specification
## Overview
This specification defines the command capability for the BLS Project Console, which allows users to send console commands to Redis queues for other programs to read and execute.
## Requirements
### Requirement: Command Sending to Redis
The system SHALL send commands to a Redis queue.
#### Scenario: Sending a command to Redis queue
- **WHEN** the user enters a command in the console
- **AND** clicks the "Send" button
- **THEN** the command SHALL be sent to the Redis queue
- **AND** the user SHALL receive a success confirmation
### Requirement: Command Validation
The system SHALL validate commands before sending them to Redis.
#### Scenario: Validating an empty command
- **WHEN** the user tries to send an empty command
- **THEN** the system SHALL display an error message
- **AND** the command SHALL NOT be sent to Redis
#### Scenario: Validating a command with invalid characters
- **WHEN** the user tries to send a command with invalid characters
- **THEN** the system SHALL display an error message
- **AND** the command SHALL NOT be sent to Redis
### Requirement: Command History
The system SHALL maintain a history of sent commands.
#### Scenario: Viewing command history
- **WHEN** the user opens the command history
- **THEN** the system SHALL display a list of previously sent commands
- **AND** the user SHALL be able to select a command from the history to resend
### Requirement: Command Response Handling
The system SHALL handle responses from commands sent to Redis.
#### Scenario: Receiving a command response
- **WHEN** a command response is received from Redis
- **THEN** the system SHALL display the response in the console
- **AND** the response SHALL be associated with the original command
## Data Model
### Command
```json
{
"id": "string",
"content": "string",
"timestamp": "ISO-8601 string",
"status": "string" // e.g., sent, processing, completed, failed
}
```
### Command Response
```json
{
"id": "string",
"commandId": "string",
"timestamp": "ISO-8601 string",
"status": "string", // e.g., success, failure
"result": "string" // command execution result
}
```
## API Endpoints
### POST /api/commands
- **Description**: Send a command to the Redis queue
- **Request Body**:
```json
{
"content": "string" // the command to send
}
```
- **Response**:
```json
{
"success": true,
"message": "Command sent successfully",
"commandId": "string"
}
```
### GET /api/commands/history
- **Description**: Get command history
- **Query Parameters**:
- `limit`: Maximum number of commands to return (default: 50)
- `offset`: Offset for pagination (default: 0)
- **Response**: Array of command objects
### GET /api/commands/:id/response
- **Description**: Get response for a specific command
- **Response**: Command response object
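For reference, a minimal frontend call against these endpoints using Axios (the base URL and module name are assumptions, not part of this specification):
```js
// commandService.js — frontend sketch using Axios (base URL is an assumption)
import axios from 'axios';

const api = axios.create({ baseURL: 'http://localhost:3000' });

export async function sendCommand(content) {
  const { data } = await api.post('/api/commands', { content });
  return data.commandId; // successful responses include the new command id
}

export async function fetchCommandHistory(limit = 50, offset = 0) {
  const { data } = await api.get('/api/commands/history', { params: { limit, offset } });
  return data; // array of command objects
}
```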


@@ -0,0 +1,111 @@
# Logging Capability Design
## Context
This design document describes the technical implementation of the logging capability for the BLS Project Console, which allows the system to read log records from Redis queues and display them in the console interface.
## Goals / Non-Goals
### Goals
- Implement real-time log reading from Redis queues
- Provide a user-friendly log display interface
- Support log filtering by level and time range
- Ensure high performance and low latency
- Implement proper error handling and reconnection mechanisms
### Non-Goals
- Log storage or persistence beyond memory
- Log analysis or visualization (charts, graphs)
- Advanced log search capabilities
## Decisions
### Decision: Redis Queue Implementation
- **What**: Use Redis List as the queue data structure
- **Why**: Redis Lists provide efficient push/pop operations with O(1) time complexity, making them ideal for message queues
- **Alternatives considered**:
- Redis Streams: More advanced but overkill for our use case
- Redis Pub/Sub: No persistence, so logs would be lost if the server is down
### Decision: Real-time Updates
- **What**: Use Server-Sent Events (SSE) for real-time log updates
- **Why**: SSE is simpler than WebSockets for one-way communication, has better browser support, and is easier to implement
- **Alternatives considered**:
- WebSockets: More complex for one-way communication
- Polling: Higher latency and more resource-intensive
### Decision: Log Storage
- **What**: Store logs in memory with a configurable maximum size
- **Why**: In-memory storage provides fast access times and avoids the complexity of database management
- **Alternatives considered**:
- Database storage: Adds complexity and latency
- File system: Not suitable for real-time access
## Architecture
### Frontend Architecture
```
LogView Component
├── LogList Component
├── LogFilter Component
└── LogService
```
### Backend Architecture
```
Log Routes
├── Log Service
│   ├── Redis Client
│   └── Log Manager
└── SSE Controller
```
## Implementation Details
### Redis Connection
- Use the `redis` npm package to connect to Redis
- Implement automatic reconnection with exponential backoff
- Handle connection errors gracefully
### Log Reading
1. Server establishes connection to Redis
2. Server listens for new log records using `BLPOP` command (blocking pop)
3. When a log record is received, it's added to the in-memory log store
4. The log is then sent to all connected SSE clients
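A sketch of this read loop, assuming the `redis` v4 client; the log queue name, the in-memory store, and the set of SSE clients are illustrative placeholders:
```js
// logReader.js — sketch of the loop above (queue name and helpers are illustrative)
import { redisClient } from './redisClient.js';
import { logStore } from './logStore.js';     // hypothetical bounded in-memory store
import { sseClients } from './sseClients.js'; // hypothetical Set of Express response objects

const LOG_QUEUE = process.env.LOG_QUEUE ?? 'bls:logs';

export async function startLogReader() {
  const blocking = redisClient.duplicate(); // dedicated connection for the blocking pop
  await blocking.connect();                 // step 1: connect to Redis

  for (;;) {
    const entry = await blocking.blPop(LOG_QUEUE, 0); // step 2: block until a log arrives
    if (!entry) continue;

    const log = JSON.parse(entry.element);
    logStore.add(log);                                 // step 3: keep it in memory

    for (const res of sseClients) {                    // step 4: push to every SSE client
      res.write(`data: ${JSON.stringify(log)}\n\n`);
    }
  }
}
```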
### Log Storage
- Use an array to store log records in memory
- Implement a circular buffer to limit memory usage
- Default maximum log count: 10,000
- Configurable via environment variable
### Log Display
- Use a scrollable list to display logs
- Implement virtual scrolling for large log sets to improve performance
- Color-code logs by level (INFO: gray, WARN: yellow, ERROR: red, DEBUG: blue)
### Log Filtering
- Implement client-side filtering for performance
- Allow filtering by log level (INFO, WARN, ERROR, DEBUG)
- Allow filtering by time range using a date picker
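A sketch of the client-side filter as a computed value in the Composition API; the composable name and refs are illustrative, and the field names follow the log record data model:
```js
// useLogFilter.js — client-side filtering sketch (composable name and refs are illustrative)
import { ref, computed } from 'vue';

export function useLogFilter(logs) {
  const level = ref(null);      // e.g. 'ERROR', or null for all levels
  const startTime = ref(null);  // ISO-8601 string from the date picker
  const endTime = ref(null);

  const filteredLogs = computed(() =>
    logs.value.filter((log) =>
      (!level.value || log.level === level.value) &&
      // ISO-8601 strings in the same timezone compare correctly as strings
      (!startTime.value || log.timestamp >= startTime.value) &&
      (!endTime.value || log.timestamp <= endTime.value)
    )
  );

  return { level, startTime, endTime, filteredLogs };
}
```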
## Risks / Trade-offs
### Risk: Redis Connection Failure
- **Risk**: If Redis connection is lost, logs won't be received
- **Mitigation**: Implement automatic reconnection with exponential backoff, and notify users when connection is lost
### Risk: High Log Volume
- **Risk**: Large number of logs could cause performance issues
- **Mitigation**: Implement a circular buffer to limit memory usage, and use virtual scrolling in the frontend
### Risk: Browser Performance
- **Risk**: Displaying thousands of logs could slow down the browser
- **Mitigation**: Use virtual scrolling and limit the number of logs displayed at once
## Migration Plan
No migration is required as this is a new feature.
## Open Questions
- What is the expected maximum log volume per minute?
- Should we add support for log persistence to disk?
- Should we implement log search functionality?

View File

@@ -0,0 +1,71 @@
# Logging Capability Specification
## Overview
This specification defines the logging capability for the BLS Project Console, which allows the system to read log records from Redis queues and display them in the console interface.
## Requirements
### Requirement: Log Reading from Redis
The system SHALL read log records from a Redis queue.
#### Scenario: Reading logs from Redis queue
- **WHEN** the server starts
- **THEN** it SHALL establish a connection to the Redis queue
- **AND** it SHALL begin listening for new log records
- **AND** it SHALL store log records in memory for display
### Requirement: Log Display in Console
The system SHALL display log records in a user-friendly format.
#### Scenario: Displaying logs in the console
- **WHEN** a log record is received from Redis
- **THEN** it SHALL be added to the log list in the console
- **AND** it SHALL display the log timestamp, level, and message
- **AND** it SHALL support scrolling through historical logs
### Requirement: Log Filtering
The system SHALL allow users to filter logs by level and time range.
#### Scenario: Filtering logs by level
- **WHEN** the user selects a log level filter
- **THEN** only logs with the selected level SHALL be displayed
#### Scenario: Filtering logs by time range
- **WHEN** the user selects a time range
- **THEN** only logs within the specified range SHALL be displayed
### Requirement: Log Auto-Refresh
The system SHALL automatically refresh logs in real-time.
#### Scenario: Real-time log updates
- **WHEN** a new log is added to the Redis queue
- **THEN** it SHALL be automatically displayed in the console
- **AND** the console SHALL scroll to the latest log if the user is viewing the end
## Data Model
### Log Record
```json
{
"id": "string",
"timestamp": "ISO-8601 string",
"level": "string", // e.g., INFO, WARN, ERROR, DEBUG
"message": "string",
"metadata": "object" // optional additional data
}
```
## API Endpoints
### GET /api/logs
- **Description**: Get log records
- **Query Parameters**:
- `level`: Filter logs by level
- `startTime`: Filter logs from this timestamp
- `endTime`: Filter logs up to this timestamp
- `limit`: Maximum number of logs to return
- **Response**: Array of log records
### GET /api/logs/live
- **Description**: Establish a Server-Sent Events (SSE) stream for real-time log updates
- **Response**: Continuous stream of log records
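A sketch of how the frontend could consume the live stream with the browser's `EventSource` API; the backend origin is an assumption:
```js
// logStream.js — frontend sketch for GET /api/logs/live (origin is an assumption)
const source = new EventSource('http://localhost:3000/api/logs/live');

source.onmessage = (event) => {
  const log = JSON.parse(event.data); // matches the Log Record data model above
  console.log(`[${log.level}] ${log.timestamp} ${log.message}`);
};

source.onerror = () => {
  // EventSource reconnects automatically; this is just a place to surface status in the UI
  console.warn('Log stream disconnected, retrying...');
};
```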


@@ -0,0 +1,114 @@
# Redis Connection Capability Design
## Context
This design document describes the technical implementation of the Redis connection capability for the BLS Project Console, which manages the connection between the system and the Redis server for reading logs and sending commands.
## Goals / Non-Goals
### Goals
- Establish and manage connection to Redis server
- Provide configuration options for Redis connection
- Implement automatic reconnection mechanism
- Handle connection errors gracefully
- Monitor and report connection status
### Non-Goals
- Redis server administration
- Redis cluster management
- Advanced Redis features (e.g., pub/sub, streams) beyond basic queue operations
## Decisions
### Decision: Redis Client Library
- **What**: Use the official `redis` npm package
- **Why**: It's the official Redis client for Node.js, well-maintained, and supports all Redis commands
- **Alternatives considered**:
- `ioredis`: More features but more complex
- `node-redis`: Older, less maintained
### Decision: Connection Configuration
- **What**: Use environment variables for Redis connection configuration
- **Why**: Environment variables are a standard way to configure services in containerized environments, and they allow easy configuration without code changes
- **Alternatives considered**:
- Configuration files: Less flexible for containerized environments
- Hardcoded values: Not suitable for production use
### Decision: Reconnection Strategy
- **What**: Use exponential backoff for reconnection attempts
- **Why**: Exponential backoff prevents overwhelming the Redis server with reconnection attempts, while still ensuring timely reconnection
- **Alternatives considered**:
- Fixed interval reconnection: Less efficient, could overwhelm the server
- No reconnection: Not suitable for production use
## Architecture
### Redis Connection Architecture
```
Redis Connection Manager
├── Redis Client
├── Connection Monitor
├── Reconnection Handler
└── Configuration Manager
```
## Implementation Details
### Redis Client Initialization
1. Server reads Redis configuration from environment variables
2. Server creates a Redis client instance with the configuration
3. Server attaches event listeners for connection events (connect, error, end, reconnecting)
4. Server attempts to connect to Redis
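A sketch of steps 1-4 with the `redis` v4 client, wiring up a subset of the environment variables listed in the table below; file name and logging style are illustrative:
```js
// redisConnection.js — sketch of steps 1-4 (uses a subset of the variables listed below)
import { createClient } from 'redis';

const client = createClient({
  socket: {
    host: process.env.REDIS_HOST ?? 'localhost',
    port: Number(process.env.REDIS_PORT ?? 6379),
    connectTimeout: Number(process.env.REDIS_CONNECT_TIMEOUT ?? 10000),
  },
  password: process.env.REDIS_PASSWORD || undefined,
  database: Number(process.env.REDIS_DB ?? 0),
});

// Step 3: event listeners for connection events
client.on('connect', () => console.log('Redis: connecting'));
client.on('ready', () => console.log('Redis: connected'));
client.on('error', (err) => console.error('Redis error:', err.message));
client.on('end', () => console.warn('Redis: connection closed'));
client.on('reconnecting', () => console.warn('Redis: reconnecting'));

// Step 4: attempt the connection
await client.connect();

export default client;
```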
### Configuration Parameters
| Parameter | Default Value | Environment Variable | Description |
|-----------|---------------|----------------------|-------------|
| host | localhost | REDIS_HOST | Redis server hostname |
| port | 6379 | REDIS_PORT | Redis server port |
| password | null | REDIS_PASSWORD | Redis server password |
| db | 0 | REDIS_DB | Redis database index |
| connectTimeout | 10000 | REDIS_CONNECT_TIMEOUT | Connection timeout in milliseconds |
| maxRetriesPerRequest | 3 | REDIS_MAX_RETRIES | Maximum retries per request |
| reconnectStrategy | exponential | REDIS_RECONNECT_STRATEGY | Reconnection strategy (exponential, fixed) |
| reconnectInterval | 1000 | REDIS_RECONNECT_INTERVAL | Base reconnection interval in milliseconds |
| maxReconnectInterval | 30000 | REDIS_MAX_RECONNECT_INTERVAL | Maximum reconnection interval in milliseconds |
### Reconnection Implementation
- Use the built-in reconnection feature of the `redis` package
- Configure exponential backoff with:
- Initial delay: 1 second
- Maximum delay: 30 seconds
- Factor: 1.5
- Log each reconnection attempt with timestamp and delay
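The backoff parameters above map onto the client's `reconnectStrategy` callback; a sketch, with the environment variable names taken from the configuration table:
```js
// reconnectStrategy sketch — 1s base, factor 1.5, capped at 30s (values from the bullets above)
const BASE_DELAY_MS = Number(process.env.REDIS_RECONNECT_INTERVAL ?? 1000);
const MAX_DELAY_MS = Number(process.env.REDIS_MAX_RECONNECT_INTERVAL ?? 30000);

function reconnectStrategy(retries) {
  const delay = Math.min(BASE_DELAY_MS * 1.5 ** retries, MAX_DELAY_MS);
  console.warn(`Redis reconnect attempt ${retries + 1} in ${delay}ms at ${new Date().toISOString()}`);
  return delay; // the returned number is the wait before the next attempt
}

// Passed when the client is created:
// createClient({ socket: { reconnectStrategy } })
```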
### Error Handling
- **Connection errors**: Log the error, update connection status, and attempt to reconnect
- **Command errors**: Log the error, update the command status, and notify the user
- **Timeout errors**: Log the error, update connection status, and attempt to reconnect
### Connection Monitoring
- Track connection status (connecting, connected, disconnected, error)
- Log status changes with timestamps
- Expose connection status via API endpoint
- Update UI with connection status changes via Server-Sent Events (SSE)
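A sketch of the status tracking and the `/api/redis/status` handler described here; the status object mirrors the spec's data model, and the SSE broadcast helper is hypothetical:
```js
// redisStatus.js — connection monitoring sketch (broadcast helper is hypothetical)
import client from './redisConnection.js';
import { broadcastSse } from './sse.js';

export const redisStatus = { status: 'connecting', lastConnectedAt: null, lastDisconnectedAt: null, error: null };

function setStatus(status, error = null) {
  redisStatus.status = status;
  redisStatus.error = error;
  if (status === 'connected') redisStatus.lastConnectedAt = new Date().toISOString();
  if (status === 'disconnected') redisStatus.lastDisconnectedAt = new Date().toISOString();
  console.log(`Redis status -> ${status} at ${new Date().toISOString()}`); // log status changes
  broadcastSse('redis-status', redisStatus);                               // push the change to the UI
}

client.on('ready', () => setStatus('connected'));
client.on('end', () => setStatus('disconnected'));
client.on('error', (err) => setStatus('error', err.message));

// Exposed via Express, matching GET /api/redis/status in the spec:
// app.get('/api/redis/status', statusHandler)
export function statusHandler(req, res) {
  res.json(redisStatus);
}
```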
## Risks / Trade-offs
### Risk: Redis Server Unavailability
- **Risk**: If the Redis server is unavailable for an extended period, the system won't be able to read logs or send commands
- **Mitigation**: Implement proper error handling and reconnection logic, and notify users of the issue
### Risk: Misconfiguration of Redis Connection
- **Risk**: Incorrect Redis configuration could lead to connection failures
- **Mitigation**: Validate configuration on startup, log configuration values (excluding passwords), and provide clear error messages
### Risk: Performance Impact of Reconnection Attempts
- **Risk**: Frequent reconnection attempts could impact system performance
- **Mitigation**: Use exponential backoff to reduce the frequency of reconnection attempts, and limit the maximum reconnection delay
## Migration Plan
No migration is required as this is a new feature.
## Open Questions
- What is the expected Redis server availability?
- Should we implement connection pooling for better performance?
- Should we support Redis Sentinel or Cluster for high availability?


@@ -0,0 +1,99 @@
# Redis Connection Capability Specification
## Overview
This specification defines the Redis connection capability for the BLS Project Console, which manages the connection between the system and the Redis server for reading logs and sending commands.
## Requirements
### Requirement: Redis Connection Establishment
The system SHALL establish a connection to the Redis server.
#### Scenario: Establishing Redis connection on server start
- **WHEN** the server starts
- **THEN** it SHALL attempt to connect to the Redis server
- **AND** it SHALL log the connection status
### Requirement: Redis Connection Configuration
The system SHALL allow configuration of Redis connection parameters.
#### Scenario: Configuring Redis connection via environment variables
- **WHEN** the server starts with Redis environment variables set
- **THEN** it SHALL use those variables to configure the Redis connection
- **AND** it SHALL override default values
### Requirement: Redis Connection Reconnection
The system SHALL automatically reconnect to Redis if the connection is lost.
#### Scenario: Reconnecting to Redis after connection loss
- **WHEN** the Redis connection is lost
- **THEN** the system SHALL attempt to reconnect with exponential backoff
- **AND** it SHALL log each reconnection attempt
- **AND** it SHALL notify the user when connection is restored
### Requirement: Redis Connection Error Handling
The system SHALL handle Redis connection errors gracefully.
#### Scenario: Handling Redis connection failure
- **WHEN** the system fails to connect to Redis
- **THEN** it SHALL log the error
- **AND** it SHALL display an error message to the user
- **AND** it SHALL continue attempting to reconnect
### Requirement: Redis Connection Monitoring
The system SHALL monitor the Redis connection status.
#### Scenario: Monitoring Redis connection status
- **WHEN** the Redis connection status changes
- **THEN** the system SHALL update the connection status in the UI
- **AND** it SHALL log the status change
## Data Model
### Redis Connection Configuration
```json
{
"host": "string",
"port": "number",
"password": "string",
"db": "number",
"socket": {
"reconnectStrategy": "function",
"connectTimeout": "number",
"keepAlive": "number"
}
}
```
### Redis Connection Status
```json
{
"status": "string", // e.g., connecting, connected, disconnected, error
"lastConnectedAt": "ISO-8601 string",
"lastDisconnectedAt": "ISO-8601 string",
"error": "string" // optional error message
}
```
## API Endpoints
### GET /api/redis/status
- **Description**: Get Redis connection status
- **Response**:
```json
{
"status": "string",
"lastConnectedAt": "ISO-8601 string",
"lastDisconnectedAt": "ISO-8601 string",
"error": "string" // optional
}
```
### POST /api/redis/reconnect
- **Description**: Manually reconnect to Redis
- **Response**:
```json
{
"success": true,
"message": "Reconnection attempt initiated"
}
```