feat: implement core RCU firmware upgrade service

- Add main upgrade service logic: scheduled upgrade triggering, status querying, and logging
- Add database initialization script and log table schema
- Add PM2 deployment configuration file
- Implement environment-variable configuration system
- Add API client module for external interface calls
- Implement upgrade status polling and timeout handling
- Add test cases covering core functionality
commit d04205ddba — 2026-01-21 13:34:42 +08:00
23 changed files with 8788 additions and 0 deletions

.gitignore vendored Normal file

@@ -0,0 +1,5 @@
node_modules/
.env
.DS_Store
coverage/
logs/


@@ -0,0 +1,82 @@
# RCU Upgrade Backend Service Implementation Plan
This plan is based on the requirements in `project.md` and the development standards in `Promise.md`.
## Phase 1: Project Initialization & Specification (Spec-First)
**Goal**: Set up the environment and define specifications before coding.
1. **Project Scaffolding**
- Initialize Node.js project (v24.10.0 as per environment, satisfying v22+ requirement).
- Configure `npm` as the package manager.
- Create directory structure: `spec/`, `src/`, `tests/`, `scripts/`.
- Install development dependencies: `@fission-ai/openspec`, `eslint` (or similar), testing framework (e.g., `jest` or `mocha`).
- Create `README.md` with run/test/spec instructions.
2. **OpenSpec Definition**
- Create `spec/rcu-upgrade-flow.yaml` (using OpenAPI 3.1 or JSON Schema for non-API logical specification).
- Define data structures:
- `UpgradeConfig` (RoomType, HostList, FileNames).
- `UpgradeLog` (DB Schema representation).
- `APIResponse` schemas for external calls (`WebChatUpgrade`, `QueryUpdateHostProgressBar`).
- Define `npm` scripts: `npm run spec:lint`, `npm run spec:validate`.
- **Checkpoint**: Pass `spec:validate`.
## Phase 2: Database & Configuration Design
**Goal**: Prepare data storage and configuration management.
3. **Database Setup**
- Design PostgreSQL schema for `test_upgrade` database.
- Create SQL script for `upgrade_log` table with fields:
- `start_time`, `roomtype_id`, `host_str`, `filename`, `status`, `end_time`, `file_type`, `config_version`, `firmware_version`, `uuid`.
- **Clarification Needed**: Design a mechanism to track the "2 consecutive upgrades per version" state. (Will propose adding an `upgrade_state` table, or using a local file if DB schema changes are restricted.)
4. **Configuration Module**
- Implement `.env` parser.
- Create a configuration loader to parse the complex JSON-like structure for `roomtype_id` arrays, `host_list_str` arrays, and `fileName` pairs.
- Validate configuration against the Spec defined in Phase 1.
## Phase 3: Core Implementation
**Goal**: Implement business logic adhering to constraints.
5. **External API Client**
- Implement `UpgradeClient` to handle HTTP POST requests to `https://www.boonlive-rcu.com/api`.
- Implement `WebChatUpgrade` call.
- Implement `QueryUpdateHostProgressBar` call.
6. **Upgrade Logic Controller**
- Implement the main workflow:
1. Determine current `fileName` based on rotation logic (2x A, 2x B).
2. Call `WebChatUpgrade`.
3. Wait 45 seconds.
4. Poll `QueryUpdateHostProgressBar` (Interval: 30s, Timeout: 5m).
5. Process results and trigger DB logging.
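The wait-then-poll steps above could be sketched as follows. This is a minimal illustration, not the committed implementation: the timings come from the plan, `pollOnce` is a placeholder for the status-query client call, and the terminal statuses (`升级完成`, `超时失败`) are the values documented for the external API.

```javascript
// Sketch of the workflow's polling phase: wait 45 s, then poll every 30 s
// for up to 5 minutes, stopping early once every host is terminal.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const TERMINAL_STATUSES = new Set(['升级完成', '超时失败']);

async function pollUntilDone(pollOnce, { initialDelayMs = 45_000, intervalMs = 30_000, timeoutMs = 300_000 } = {}) {
  await sleep(initialDelayMs);
  const deadline = Date.now() + timeoutMs;
  let last = null;
  while (Date.now() < deadline) {
    try {
      last = await pollOnce();
      // Stop early if every host reports a terminal status.
      if (last && last.IsSuccess && last.Response.every((h) => TERMINAL_STATUSES.has(h.Upgrade_status))) {
        return { timedOut: false, result: last };
      }
    } catch (err) {
      // Failed or empty responses are retried until the timeout, per the plan.
    }
    await sleep(intervalMs);
  }
  return { timedOut: true, result: last };
}

module.exports = { pollUntilDone };
```

The caller would pass `() => queryStatus(hostList)` as `pollOnce` and then hand `result` to the DB logger.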
7. **Database Logger**
- Implement `LoggerService` to write to `test_upgrade`.
- Ensure UUID consistency between the initial call and the status result.
- Implement the logic to only log the "Completed" status or final "Timeout/Fail" status as requested.
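The "log only the final status" write in item 7 could look like the following sketch. The `pool` handle and the `entry` field names are assumptions for illustration; the column list follows the `upgrade_log` schema in `scripts/init_db.sql`.

```javascript
// Sketch of the final-status write: one parameterized INSERT per host,
// keyed by the run's uuid so trigger and status records stay linked.
async function logFinalStatus(pool, entry) {
  const sql = `INSERT INTO upgrade_log
      (uuid, start_time, roomtype_id, host_str, filename, status,
       end_time, file_type, config_version, firmware_version)
    VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)`;
  await pool.query(sql, [
    entry.uuid, entry.startTime, entry.roomtypeId, entry.hostStr,
    entry.fileName, entry.status, entry.endTime, entry.fileType,
    entry.configVersion, entry.firmwareVersion
  ]);
}

module.exports = { logFinalStatus };
```

Parameterized queries (rather than string interpolation) keep the Chinese status strings and filenames safe to store verbatim.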
8. **Scheduler**
- Implement the 10-minute interval timer (configurable).
- Ensure overlapping executions are handled or prevented (if applicable).
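One way to handle overlap, sketched under stated assumptions (the env variable name `UPGRADE_INTERVAL_MS` and the skip-a-tick policy are illustrative choices, not the committed design), is a simple in-flight guard:

```javascript
// Sketch of a non-overlapping scheduler: if the previous upgrade cycle is
// still running when the interval fires, skip the tick instead of stacking runs.
let running = false;

function startScheduler(runUpgradeCycle, intervalMs = Number(process.env.UPGRADE_INTERVAL_MS) || 600_000) {
  return setInterval(async () => {
    if (running) {
      console.log('Previous upgrade cycle still running; skipping this tick.');
      return;
    }
    running = true;
    try {
      await runUpgradeCycle();
    } catch (err) {
      console.error('Upgrade cycle failed:', err.message);
    } finally {
      running = false;
    }
  }, intervalMs);
}

module.exports = { startScheduler };
```

Since a cycle can legitimately exceed 10 minutes (45 s wait + 5 min polling per group), the guard avoids re-triggering hosts that are mid-upgrade.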
## Phase 4: Deployment & Quality Assurance
**Goal**: Prepare for production deployment on Windows Server.
9. **PM2 Configuration**
- Create `pm2.json` in the root directory.
- Configure environment variables and log paths in PM2 config.
10. **Testing**
- Implement unit tests in `tests/`.
- Mock external API responses for success/failure/timeout scenarios.
- **Checkpoint**: `npm run test` passes.
11. **Final Verification**
- Verify against `Promise.md` checklist (Lint, Spec consistency, No extra features).
- Verify against `project.md` functional requirements.
## Clarifications & Risks
The following points require attention or assumption confirmation:
1. **State Persistence**: The requirement "each version needs to be upgraded 2 times consecutively, then switch" implies persistent state across the 10-minute intervals. *Assumption*: I will implement a lightweight persistence mechanism (e.g., a small JSON file or a dedicated DB table `upgrade_state`) to track the current version and iteration count, as `upgrade_log` is for historical logging only.
2. **Concurrency**: If a single upgrade process takes > 10 minutes (e.g., 5 min timeout + retries), should the next scheduled job start? *Assumption*: overlap is allowed in principle, but each scheduled run will guard against re-triggering a group whose previous cycle is still in flight.
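The "2 consecutive upgrades per version" rotation from clarification 1 can be kept as a tiny pure function whose state is persisted between scheduler ticks (in the proposed `upgrade_state` table or a JSON file). The `fileIndex`/`runCount` field names here are assumptions for illustration:

```javascript
// Sketch of the 2x A, 2x B rotation: pick the current version, bump the
// run counter, and switch versions after runsPerVersion consecutive runs.
function nextRotation(state, runsPerVersion = 2, versionCount = 2) {
  const { fileIndex = 0, runCount = 0 } = state || {};
  const pickedIndex = fileIndex; // version to use for this run
  let nextCount = runCount + 1;
  let nextIndex = fileIndex;
  if (nextCount >= runsPerVersion) {
    nextCount = 0;
    nextIndex = (fileIndex + 1) % versionCount; // switch to the other version
  }
  return { pickedIndex, nextState: { fileIndex: nextIndex, runCount: nextCount } };
}

module.exports = { nextRotation };
```

Keeping the transition pure makes the persistence layer trivial: load state, call `nextRotation`, use `pickedIndex` to choose the `fileName`, and save `nextState`.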

AGENTS.md Normal file

@@ -0,0 +1,18 @@
<!-- OPENSPEC:START -->
# OpenSpec Instructions
These instructions are for AI assistants working in this project.
Always open `@/openspec/AGENTS.md` when the request:
- Mentions planning or proposals (words like proposal, spec, change, plan)
- Introduces new capabilities, breaking changes, architecture shifts, or big performance/security work
- Sounds ambiguous and you need the authoritative spec before coding
Use `@/openspec/AGENTS.md` to learn:
- How to create and apply change proposals
- Spec format and conventions
- Project structure and guidelines
Keep this managed block so 'openspec update' can refresh the instructions.
<!-- OPENSPEC:END -->

Promise.md Normal file

@@ -0,0 +1,86 @@
Development Framework Constraints (for AI project scaffolding)
Purpose: this document constrains the AI's technology choices, directory structure, engineering practices, and delivery process when creating or refactoring projects. Unless explicitly instructed otherwise by a human, the AI must not deviate from these constraints.
1. Runtime & Baseline Constraints
- Node.js version: must use Node.js 22+ (latest LTS recommended).
- Primary language: JavaScript (.js). Type-checking aids (e.g., JSDoc + `// @ts-check`) may be introduced when necessary, but TypeScript is not the primary language by default.
- Package manager: **npm is mandatory and used uniformly**.
- Cross-platform: must work on both Windows (PowerShell) and Unix-like environments by default.
2. Tech Stack Constraints
2.1 Frontend (if a frontend is needed)
- Framework: must use Vue 3.x.
- Ecosystem: only introduce libraries compatible with Vue 3.x; avoid legacy libraries bound to Vue 2.x.
- Build tool: prefer Vite (if this conflicts with an existing project, explain why and stay consistent with it).
2.2 Backend (if a backend is needed)
- Runtime: must use Node.js.
- Language: JavaScript as the primary backend language as well.
- API style: HTTP JSON API by default (GraphQL/WebSocket etc. must be explicitly justified and still follow the OpenSpec constraints).
3. OpenSpec (Spec-Driven Development) Process Constraints
> Note: "OpenSpec" here refers to the spec-driven toolchain obtained by globally installing `@fission-ai/openspec`; for API scenarios, interface contracts must use and conform to OpenAPI 3.1. The two do not conflict: OpenSpec drives and validates the process, while OpenAPI 3.1 is the contract the spec files must satisfy.
3.0 OpenSpec Toolchain Installation (mandatory)
- Development and CI environments must have a working OpenSpec toolchain:
- Install command: npm install -g @fission-ai/openspec@latest
- When the AI generates project scripts, it:
- Must wire spec validation into npm scripts (see 3.3).
- Must not bypass OpenSpec validation and deliver an API implementation unconstrained by any spec.
3.1 Required Spec Deliverables
- The project must contain a traceable spec file:
- API projects: `spec/openapi.yaml` (or `spec/openapi.json`), OpenAPI version 3.1.
- Non-API projects: still provide a corresponding specification (e.g., flows, data structures, input/output contracts) under the spec/ directory.
- Spec files must:
- Be lintable and validatable.
- Stay consistent with the implementation (implementation changes must update the spec in sync).
3.2 Development Order (mandatory)
1. Spec first: before adding or modifying features, update the spec under `spec/`.
2. Then implement: the implementation must match the spec.
3. Then verify: CI/local scripts must include a spec-validation step.
4. Then document: the README must explain how to view/use the spec and how to run validation.
3.3 Spec Validation & Integration (mandatory)
- The following scripts must be provided (example names; may be adapted per project but not omitted):
- npm run spec:lint (invoke OpenSpec to lint spec/; see openspec --help for exact CLI arguments)
- npm run spec:validate (invoke OpenSpec for structural/reference/contract validation of spec/; see openspec --help for exact CLI arguments)
- For APIs:
- Request/response validation must be provided at the implementation layer, or at minimum contract validation must run during testing.
- Generating client/server stubs or type definitions from OpenAPI is encouraged (not mandatory), but must not change the "JavaScript-first" premise.
4. Project Structure Constraints (recommended defaults)
When creating a project, the AI uses the following structure by default; if the project type doesn't fit, make minimal adjustments without violating the constraints.
- spec/: OpenSpec specification (OpenAPI or other specs)
- src/: source code
- tests/: tests
- scripts/: engineering scripts (build/validate/generate, etc.)
- README.md: must cover how to run, test, and use the spec
5. Quality & Delivery Constraints (mandatory)
- Basic scripts must be provided:
- npm run dev (if interactive development applies)
- npm run build (if a build is needed)
- npm run test
- npm run lint
- Change requirements:
- When modifying the implementation, update spec/ and tests in sync.
- Never change only the implementation without the spec, nor only the spec without the implementation.
6. AI Behavior Constraints (mandatory)
- If a user request conflicts with this document:
- Point out the conflict first and ask the user to confirm whether deviating from the constraints is allowed.
- Unless explicitly requested:
- Do not introduce unrelated "extra pages/features/components/fancy configuration".
- Keep the implementation minimal, verifiable, and maintainable.

README.md Normal file

@@ -0,0 +1,21 @@
# RCU Upgrade Service
## Overview
Node.js backend service to manage RCU firmware upgrades.
Triggers upgrades via `WebChatUpgrade` API and polls status via `QueryUpdateHostProgressBar`.
## Setup
1. Install dependencies: `npm install`
2. Configure `.env` (see `.env.example` or documentation)
3. Setup PostgreSQL database `test_upgrade`.
## Running
- Dev: `npm run dev`
- Production: `pm2 start pm2.json`
## Spec & Validation
- Lint Spec: `npm run spec:lint`
- Validate Spec: `npm run spec:validate`
## Testing
- Run tests: `npm test`
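For reference, a possible `.env` shape is sketched below. Only the `DB_*` names are confirmed by `scripts/init_db.js`; the remaining variable names and all values are illustrative assumptions and should be checked against the actual config loader.

```ini
# Illustrative .env sketch — DB_* keys match scripts/init_db.js; the rest are assumptions
DB_HOST=localhost
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=secret
DB_NAME=test_upgrade
API_BASE_URL=https://www.boonlive-rcu.com/api
UPGRADE_INTERVAL_MINUTES=10
UPGRADE_CONFIG=[{"hosts":[1,2,3],"roomtypes":[{"roomtype_id":2,"fileName":"a.bin"},{"roomtype_id":2,"fileName":"b.bin"}]}]
```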

openspec/AGENTS.md Normal file

@@ -0,0 +1,456 @@
# OpenSpec Instructions
Instructions for AI coding assistants using OpenSpec for spec-driven development.
## TL;DR Quick Checklist
- Search existing work: `openspec spec list --long`, `openspec list` (use `rg` only for full-text search)
- Decide scope: new capability vs modify existing capability
- Pick a unique `change-id`: kebab-case, verb-led (`add-`, `update-`, `remove-`, `refactor-`)
- Scaffold: `proposal.md`, `tasks.md`, `design.md` (only if needed), and delta specs per affected capability
- Write deltas: use `## ADDED|MODIFIED|REMOVED|RENAMED Requirements`; include at least one `#### Scenario:` per requirement
- Validate: `openspec validate [change-id] --strict --no-interactive` and fix issues
- Request approval: Do not start implementation until proposal is approved
## Three-Stage Workflow
### Stage 1: Creating Changes
Create proposal when you need to:
- Add features or functionality
- Make breaking changes (API, schema)
- Change architecture or patterns
- Optimize performance (changes behavior)
- Update security patterns
Triggers (examples):
- "Help me create a change proposal"
- "Help me plan a change"
- "Help me create a proposal"
- "I want to create a spec proposal"
- "I want to create a spec"
Loose matching guidance:
- Contains one of: `proposal`, `change`, `spec`
- With one of: `create`, `plan`, `make`, `start`, `help`
Skip proposal for:
- Bug fixes (restore intended behavior)
- Typos, formatting, comments
- Dependency updates (non-breaking)
- Configuration changes
- Tests for existing behavior
**Workflow**
1. Review `openspec/project.md`, `openspec list`, and `openspec list --specs` to understand current context.
2. Choose a unique verb-led `change-id` and scaffold `proposal.md`, `tasks.md`, optional `design.md`, and spec deltas under `openspec/changes/<id>/`.
3. Draft spec deltas using `## ADDED|MODIFIED|REMOVED Requirements` with at least one `#### Scenario:` per requirement.
4. Run `openspec validate <id> --strict --no-interactive` and resolve any issues before sharing the proposal.
### Stage 2: Implementing Changes
Track these steps as TODOs and complete them one by one.
1. **Read proposal.md** - Understand what's being built
2. **Read design.md** (if exists) - Review technical decisions
3. **Read tasks.md** - Get implementation checklist
4. **Implement tasks sequentially** - Complete in order
5. **Confirm completion** - Ensure every item in `tasks.md` is finished before updating statuses
6. **Update checklist** - After all work is done, set every task to `- [x]` so the list reflects reality
7. **Approval gate** - Do not start implementation until the proposal is reviewed and approved
### Stage 3: Archiving Changes
After deployment, create separate PR to:
- Move `changes/[name]/``changes/archive/YYYY-MM-DD-[name]/`
- Update `specs/` if capabilities changed
- Use `openspec archive <change-id> --skip-specs --yes` for tooling-only changes (always pass the change ID explicitly)
- Run `openspec validate --strict --no-interactive` to confirm the archived change passes checks
## Before Any Task
**Context Checklist:**
- [ ] Read relevant specs in `specs/[capability]/spec.md`
- [ ] Check pending changes in `changes/` for conflicts
- [ ] Read `openspec/project.md` for conventions
- [ ] Run `openspec list` to see active changes
- [ ] Run `openspec list --specs` to see existing capabilities
**Before Creating Specs:**
- Always check if capability already exists
- Prefer modifying existing specs over creating duplicates
- Use `openspec show [spec]` to review current state
- If request is ambiguous, ask 1-2 clarifying questions before scaffolding
### Search Guidance
- Enumerate specs: `openspec spec list --long` (or `--json` for scripts)
- Enumerate changes: `openspec list` (or `openspec change list --json` - deprecated but available)
- Show details:
- Spec: `openspec show <spec-id> --type spec` (use `--json` for filters)
- Change: `openspec show <change-id> --json --deltas-only`
- Full-text search (use ripgrep): `rg -n "Requirement:|Scenario:" openspec/specs`
## Quick Start
### CLI Commands
```bash
# Essential commands
openspec list # List active changes
openspec list --specs # List specifications
openspec show [item] # Display change or spec
openspec validate [item] # Validate changes or specs
openspec archive <change-id> [--yes|-y] # Archive after deployment (add --yes for non-interactive runs)
# Project management
openspec init [path] # Initialize OpenSpec
openspec update [path] # Update instruction files
# Interactive mode
openspec show # Prompts for selection
openspec validate # Bulk validation mode
# Debugging
openspec show [change] --json --deltas-only
openspec validate [change] --strict --no-interactive
```
### Command Flags
- `--json` - Machine-readable output
- `--type change|spec` - Disambiguate items
- `--strict` - Comprehensive validation
- `--no-interactive` - Disable prompts
- `--skip-specs` - Archive without spec updates
- `--yes`/`-y` - Skip confirmation prompts (non-interactive archive)
## Directory Structure
```
openspec/
├── project.md # Project conventions
├── specs/ # Current truth - what IS built
│ └── [capability]/ # Single focused capability
│ ├── spec.md # Requirements and scenarios
│ └── design.md # Technical patterns
├── changes/ # Proposals - what SHOULD change
│ ├── [change-name]/
│ │ ├── proposal.md # Why, what, impact
│ │ ├── tasks.md # Implementation checklist
│ │ ├── design.md # Technical decisions (optional; see criteria)
│ │ └── specs/ # Delta changes
│ │ └── [capability]/
│ │ └── spec.md # ADDED/MODIFIED/REMOVED
│ └── archive/ # Completed changes
```
## Creating Change Proposals
### Decision Tree
```
New request?
├─ Bug fix restoring spec behavior? → Fix directly
├─ Typo/format/comment? → Fix directly
├─ New feature/capability? → Create proposal
├─ Breaking change? → Create proposal
├─ Architecture change? → Create proposal
└─ Unclear? → Create proposal (safer)
```
### Proposal Structure
1. **Create directory:** `changes/[change-id]/` (kebab-case, verb-led, unique)
2. **Write proposal.md:**
```markdown
# Change: [Brief description of change]
## Why
[1-2 sentences on problem/opportunity]
## What Changes
- [Bullet list of changes]
- [Mark breaking changes with **BREAKING**]
## Impact
- Affected specs: [list capabilities]
- Affected code: [key files/systems]
```
3. **Create spec deltas:** `specs/[capability]/spec.md`
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL provide...
#### Scenario: Success case
- **WHEN** user performs action
- **THEN** expected result
## MODIFIED Requirements
### Requirement: Existing Feature
[Complete modified requirement]
## REMOVED Requirements
### Requirement: Old Feature
**Reason**: [Why removing]
**Migration**: [How to handle]
```
If multiple capabilities are affected, create multiple delta files under `changes/[change-id]/specs/<capability>/spec.md`—one per capability.
4. **Create tasks.md:**
```markdown
## 1. Implementation
- [ ] 1.1 Create database schema
- [ ] 1.2 Implement API endpoint
- [ ] 1.3 Add frontend component
- [ ] 1.4 Write tests
```
5. **Create design.md when needed:**
Create `design.md` if any of the following apply; otherwise omit it:
- Cross-cutting change (multiple services/modules) or a new architectural pattern
- New external dependency or significant data model changes
- Security, performance, or migration complexity
- Ambiguity that benefits from technical decisions before coding
Minimal `design.md` skeleton:
```markdown
## Context
[Background, constraints, stakeholders]
## Goals / Non-Goals
- Goals: [...]
- Non-Goals: [...]
## Decisions
- Decision: [What and why]
- Alternatives considered: [Options + rationale]
## Risks / Trade-offs
- [Risk] → Mitigation
## Migration Plan
[Steps, rollback]
## Open Questions
- [...]
```
## Spec File Format
### Critical: Scenario Formatting
**CORRECT** (use #### headers):
```markdown
#### Scenario: User login success
- **WHEN** valid credentials provided
- **THEN** return JWT token
```
**WRONG** (don't use bullets or bold):
```markdown
- **Scenario: User login** ❌
**Scenario**: User login ❌
### Scenario: User login ❌
```
Every requirement MUST have at least one scenario.
### Requirement Wording
- Use SHALL/MUST for normative requirements (avoid should/may unless intentionally non-normative)
### Delta Operations
- `## ADDED Requirements` - New capabilities
- `## MODIFIED Requirements` - Changed behavior
- `## REMOVED Requirements` - Deprecated features
- `## RENAMED Requirements` - Name changes
Headers matched with `trim(header)` - whitespace ignored.
#### When to use ADDED vs MODIFIED
- ADDED: Introduces a new capability or sub-capability that can stand alone as a requirement. Prefer ADDED when the change is orthogonal (e.g., adding "Slash Command Configuration") rather than altering the semantics of an existing requirement.
- MODIFIED: Changes the behavior, scope, or acceptance criteria of an existing requirement. Always paste the full, updated requirement content (header + all scenarios). The archiver will replace the entire requirement with what you provide here; partial deltas will drop previous details.
- RENAMED: Use when only the name changes. If you also change behavior, use RENAMED (name) plus MODIFIED (content) referencing the new name.
Common pitfall: Using MODIFIED to add a new concern without including the previous text. This causes loss of detail at archive time. If you aren't explicitly changing the existing requirement, add a new requirement under ADDED instead.
Authoring a MODIFIED requirement correctly:
1) Locate the existing requirement in `openspec/specs/<capability>/spec.md`.
2) Copy the entire requirement block (from `### Requirement: ...` through its scenarios).
3) Paste it under `## MODIFIED Requirements` and edit to reflect the new behavior.
4) Ensure the header text matches exactly (whitespace-insensitive) and keep at least one `#### Scenario:`.
Example for RENAMED:
```markdown
## RENAMED Requirements
- FROM: `### Requirement: Login`
- TO: `### Requirement: User Authentication`
```
## Troubleshooting
### Common Errors
**"Change must have at least one delta"**
- Check `changes/[name]/specs/` exists with .md files
- Verify files have operation prefixes (## ADDED Requirements)
**"Requirement must have at least one scenario"**
- Check scenarios use `#### Scenario:` format (4 hashtags)
- Don't use bullet points or bold for scenario headers
**Silent scenario parsing failures**
- Exact format required: `#### Scenario: Name`
- Debug with: `openspec show [change] --json --deltas-only`
### Validation Tips
```bash
# Always use strict mode for comprehensive checks
openspec validate [change] --strict --no-interactive
# Debug delta parsing
openspec show [change] --json | jq '.deltas'
# Check specific requirement
openspec show [spec] --json -r 1
```
## Happy Path Script
```bash
# 1) Explore current state
openspec spec list --long
openspec list
# Optional full-text search:
# rg -n "Requirement:|Scenario:" openspec/specs
# rg -n "^#|Requirement:" openspec/changes
# 2) Choose change id and scaffold
CHANGE=add-two-factor-auth
mkdir -p openspec/changes/$CHANGE/specs/auth
printf "## Why\n...\n\n## What Changes\n- ...\n\n## Impact\n- ...\n" > openspec/changes/$CHANGE/proposal.md
printf "## 1. Implementation\n- [ ] 1.1 ...\n" > openspec/changes/$CHANGE/tasks.md
# 3) Add deltas (example)
cat > openspec/changes/$CHANGE/specs/auth/spec.md << 'EOF'
## ADDED Requirements
### Requirement: Two-Factor Authentication
Users MUST provide a second factor during login.
#### Scenario: OTP required
- **WHEN** valid credentials are provided
- **THEN** an OTP challenge is required
EOF
# 4) Validate
openspec validate $CHANGE --strict --no-interactive
```
## Multi-Capability Example
```
openspec/changes/add-2fa-notify/
├── proposal.md
├── tasks.md
└── specs/
├── auth/
│ └── spec.md # ADDED: Two-Factor Authentication
└── notifications/
└── spec.md # ADDED: OTP email notification
```
auth/spec.md
```markdown
## ADDED Requirements
### Requirement: Two-Factor Authentication
...
```
notifications/spec.md
```markdown
## ADDED Requirements
### Requirement: OTP Email Notification
...
```
## Best Practices
### Simplicity First
- Default to <100 lines of new code
- Single-file implementations until proven insufficient
- Avoid frameworks without clear justification
- Choose boring, proven patterns
### Complexity Triggers
Only add complexity with:
- Performance data showing current solution too slow
- Concrete scale requirements (>1000 users, >100MB data)
- Multiple proven use cases requiring abstraction
### Clear References
- Use `file.ts:42` format for code locations
- Reference specs as `specs/auth/spec.md`
- Link related changes and PRs
### Capability Naming
- Use verb-noun: `user-auth`, `payment-capture`
- Single purpose per capability
- 10-minute understandability rule
- Split if description needs "AND"
### Change ID Naming
- Use kebab-case, short and descriptive: `add-two-factor-auth`
- Prefer verb-led prefixes: `add-`, `update-`, `remove-`, `refactor-`
- Ensure uniqueness; if taken, append `-2`, `-3`, etc.
## Tool Selection Guide
| Task | Tool | Why |
|------|------|-----|
| Find files by pattern | Glob | Fast pattern matching |
| Search code content | Grep | Optimized regex search |
| Read specific files | Read | Direct file access |
| Explore unknown scope | Task | Multi-step investigation |
## Error Recovery
### Change Conflicts
1. Run `openspec list` to see active changes
2. Check for overlapping specs
3. Coordinate with change owners
4. Consider combining proposals
### Validation Failures
1. Run with `--strict` flag
2. Check JSON output for details
3. Verify spec file format
4. Ensure scenarios properly formatted
### Missing Context
1. Read project.md first
2. Check related specs
3. Review recent archives
4. Ask for clarification
## Quick Reference
### Stage Indicators
- `changes/` - Proposed, not yet built
- `specs/` - Built and deployed
- `archive/` - Completed changes
### File Purposes
- `proposal.md` - Why and what
- `tasks.md` - Implementation steps
- `design.md` - Technical decisions
- `spec.md` - Requirements and behavior
### CLI Essentials
```bash
openspec list # What's in progress?
openspec show [item] # View details
openspec validate --strict --no-interactive # Is it correct?
openspec archive <change-id> [--yes|-y] # Mark complete (add --yes for automation)
```
Remember: Specs are truth. Changes are proposals. Keep them in sync.

openspec/project.md Normal file

@@ -0,0 +1,31 @@
# Project Context
## Purpose
[Describe your project's purpose and goals]
## Tech Stack
- [List your primary technologies]
- [e.g., TypeScript, React, Node.js]
## Project Conventions
### Code Style
[Describe your code style preferences, formatting rules, and naming conventions]
### Architecture Patterns
[Document your architectural decisions and patterns]
### Testing Strategy
[Explain your testing approach and requirements]
### Git Workflow
[Describe your branching strategy and commit conventions]
## Domain Context
[Add domain-specific knowledge that AI assistants need to understand]
## Important Constraints
[List any technical, business, or regulatory constraints]
## External Dependencies
[Document key external services, APIs, or systems]

package-lock.json generated Normal file

File diff suppressed because it is too large

package.json Normal file

@@ -0,0 +1,28 @@
{
  "name": "upload_rcu_dev",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "jest",
    "spec:validate": "openspec validate --specs --no-interactive",
    "dev": "node src/index.js",
    "start": "node src/index.js"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "type": "commonjs",
  "dependencies": {
    "axios": "^1.13.2",
    "dotenv": "^17.2.3",
    "node-cron": "^4.2.1",
    "pg": "^8.17.1",
    "uuid": "^13.0.0"
  },
  "devDependencies": {
    "@fission-ai/openspec": "^0.22.0",
    "eslint": "^9.39.2",
    "jest": "^30.2.0"
  }
}

pm2.json Normal file

@@ -0,0 +1,13 @@
{
  "apps": [{
    "name": "rcu-upgrade-service",
    "script": "./src/index.js",
    "env": {
      "NODE_ENV": "production"
    },
    "log_date_format": "YYYY-MM-DD HH:mm:ss",
    "error_file": "./logs/err.log",
    "out_file": "./logs/out.log",
    "restart_delay": 5000
  }]
}

project.md Normal file

@@ -0,0 +1,66 @@
Following the constraints, write a Node.js backend project for me. The project must implement the following:
1. Call another project's API (an upgrade endpoint) and wait for the response. Once it returns, proceed to the next step.
2. After a response arrives, wait 45 seconds, then call a second API to fetch the upgrade success status. If that call fails or returns empty, retry every 30 seconds for up to 5 minutes, until success or timeout.
3. Record all of the above in a log database (a log database and a time-series log table must be created).
4. Deploy the project with pm2: create a pm2.json in the project root with the startup parameters configured.
5. Configure all environment variables and upgrade parameters in a .env file.
6. Create a test_upgrade database containing an upgrade_log table for recording upgrade logs.
7. A timer is needed that calls the upgrade endpoint every 10 minutes (the interval must be configurable via the env file). Note that the config file needs careful design, because multiple hosts may need to be upgraded in several groups at the same time, and each group's upgrade schedule may differ. roomtype_id should be an array; each roomtype_id corresponds to one group of host_list_str; each host_list_str should also be an array; and each host_list_str group corresponds to 2 fileName values. There are 2 fileName values because each version must be upgraded 2 times in a row, then the other version is upgraded twice, then switch back, and so on. The minimum upgrade unit is a roomtype_id; when there are multiple roomtype_id values, call the endpoint separately for each roomtype_id with its corresponding host_list_str and fileName. You need to design the config file structure and provide an example.
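One possible shape for the upgrade configuration requested above is sketched below: an array of groups, each pairing a roomtype_id with its host list and the two alternating firmware names. All values are illustrative.

```json
[
  {
    "roomtype_id": 2,
    "host_list_str": [1, 2, 3],
    "fileNames": ["firmware_a.bin", "firmware_b.bin"]
  },
  {
    "roomtype_id": 5,
    "host_list_str": [10, 11],
    "fileNames": ["firmware_c.bin", "firmware_d.bin"]
  }
]
```

Note that the committed `src/config.js` validates a slightly different variant: groups with a `hosts` array plus a two-element `roomtypes` array of `{roomtype_id, fileName}` pairs.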
Environment:
Windows Server 2022
Node.js 24.10.0
pm2 deployment
Upgrade endpoint:
https://www.boonlive-rcu.com/api/WebChatUpgrade
POST method, form data
Parameters:
roomtype_id=2 — room type ID; currently a fixed value, to be configurable via the env file
host_list_str=[1,2,3] — host IDs; currently fixed values, to be configurable via the env file
fileName=1.bin — firmware file name; currently a fixed value, to be configurable via the env file
Response:
{"IsSuccess":true,"Data":"升级中"} — IsSuccess: whether the call succeeded (true on success, false on failure); Data: response message (here "升级中", i.e., "upgrading")
Upgrade-completion query endpoint:
https://www.boonlive-rcu.com/api/QueryUpdateHostProgressBar
POST method, form data
Parameters:
HostIDList=[1,2,3] — host IDs; currently fixed values, to be configurable via the env file (equivalent to the upgrade endpoint's host_list_str)
Response:
{
  "IsSuccess": true,
  "Response": [ // upgrade status list; each object is the status of one host in HostIDList
    {
      "HostID": 12345, // host ID
      "Upgrade_status": "升级就绪", // upgrade status (enum: 升级就绪 ready, 升级完成 completed, 升级失败 failed, 开始下载 download started, 下载中 downloading, 下载完成 download complete, 校验中 verifying, 校验完成 verified, RCU升级中 RCU upgrading, 超时失败 timed out)
      "Upgrade_DateTime": "2026-01-20 09:18:57", // upgrade time, format yyyy-MM-dd HH:mm:ss
      "UpgradeFileType": "固件升级", // upgrade file type (enum: 固件升级 firmware upgrade, 配置升级 config upgrade)
      "FileBlockTotalCount": 100, // total block count
      "FileCurrentNumber": 25, // current block number
      "BaiFenBi": "25%", // upgrade progress (percentage)
      "FileName": "firmware_v1.2.bin", // upgrade file name
      "Version": "1.2.0", // firmware version number
      "ConfiguraVersion": "2.1.0" // configuration version number
    }
  ]
}
Of the above, the fields that must be written to the database are:
-1. Upgrade start time (time of the initial WebChatUpgrade call), format yyyy-MM-dd HH:mm:ss
-2. roomtype_id (room type being upgraded)
-3. host_str (host ID list being upgraded)
-4. fileName (name of the file being upgraded)
-5. Upgrade status (success or failure)
-6. Time at which completion or failure was observed (record the Upgrade_status of the last response within the 5-minute window), format yyyy-MM-dd HH:mm:ss; if every object's Upgrade_status is 升级完成 (completed) or 超时失败 (timed out), finish immediately instead of waiting out the 5 minutes
-7. Upgrade file type (firmware upgrade or configuration upgrade)
-8. Configuration version number
-9. Firmware version number
-10. Any other fields you consider necessary
Moreover, do not write to the database on every query call — only record the last Upgrade_status returned within the 5-minute window into status; if every object's Upgrade_status is 升级完成 or 超时失败, finish immediately instead of waiting out the 5 minutes.
For example: call the endpoint with roomtype_id=2, host_list_str=[1,2,3]. Write 3 rows, with host_str 1, 2, and 3 respectively. Then call QueryUpdateHostProgressBar with roomtype_id=2, HostIDList=[1,2,3]. For each host, record the information from the call in which its Response Upgrade_status became 升级完成: e.g., if host 1 has succeeded and the other two have no result yet, record host 1 first, then record host 2 once it succeeds, and so on. A uuid must be used to tie these records to the same run. Every WebChatUpgrade call must create fresh database rows.
Additionally:
https://www.boonlive-rcu.com/api must be stored as a constant in the env config file.
I have already put the PostgreSQL connection-string env entries in the .env file; add other config fields as you see fit.


@@ -0,0 +1 @@
ALTER TABLE upgrade_log DROP COLUMN IF EXISTS id;

scripts/init_db.js Normal file

@@ -0,0 +1,45 @@
require('dotenv').config();
const { Client } = require('pg');
const fs = require('fs');
const path = require('path');
const getDbConfig = (database) => ({
  host: process.env.DB_HOST,
  port: process.env.DB_PORT,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database
});
const ensureDatabase = async () => {
  const adminClient = new Client(getDbConfig('postgres'));
  await adminClient.connect();
  const dbName = process.env.DB_NAME;
  const exists = await adminClient.query('SELECT 1 FROM pg_database WHERE datname = $1', [dbName]);
  if (exists.rowCount === 0) {
    await adminClient.query(`CREATE DATABASE ${dbName}`);
  }
  await adminClient.end();
};
const ensureSchema = async () => {
  const dbClient = new Client(getDbConfig(process.env.DB_NAME));
  await dbClient.connect();
  const sqlPath = path.join(__dirname, 'init_db.sql');
  const sql = fs.readFileSync(sqlPath, 'utf8');
  await dbClient.query(sql);
  await dbClient.end();
};
const run = async () => {
  try {
    await ensureDatabase();
    await ensureSchema();
    console.log('Database initialization complete.');
  } catch (err) {
    console.error('Database initialization failed:', err.message);
    process.exit(1);
  }
};
run();

scripts/init_db.sql Normal file

@@ -0,0 +1,25 @@
-- Create database (run manually or via script if user has permissions)
-- CREATE DATABASE test_upgrade;
-- Connect to test_upgrade before running the following:
CREATE TABLE IF NOT EXISTS upgrade_log (
  uuid UUID NOT NULL,
  start_time TIMESTAMP NOT NULL,
  roomtype_id INTEGER NOT NULL,
  host_str TEXT NOT NULL,
  filename TEXT NOT NULL,
  status TEXT,
  end_time TIMESTAMP,
  file_type TEXT,
  config_version TEXT,
  firmware_version TEXT,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS upgrade_state (
  state_key TEXT PRIMARY KEY,
  current_roomtype_index INTEGER DEFAULT 0,
  execution_count INTEGER DEFAULT 0,
  last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

spec/rcu-upgrade-flow.yaml Normal file

@@ -0,0 +1,183 @@
openapi: 3.1.0
info:
  title: RCU Upgrade Service Spec
  version: 1.0.0
  description: Specification for RCU Upgrade Service API interactions and data models.
paths:
  /api/WebChatUpgrade:
    post:
      summary: Trigger Upgrade
      operationId: triggerUpgrade
      requestBody:
        content:
          application/x-www-form-urlencoded:
            schema:
              $ref: '#/components/schemas/UpgradeRequest'
      responses:
        '200':
          description: Successful response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/UpgradeResponse'
  /api/QueryUpdateHostProgressBar:
    post:
      summary: Query Upgrade Status
      operationId: queryUpgradeStatus
      requestBody:
        content:
          application/x-www-form-urlencoded:
            schema:
              $ref: '#/components/schemas/QueryRequest'
      responses:
        '200':
          description: Status response
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/QueryResponse'
components:
  schemas:
    UpgradeRequest:
      type: object
      properties:
        roomtype_id:
          type: integer
          description: Room Type ID
        host_list_str:
          type: string
          description: JSON string array of Host IDs (e.g., "[1,2,3]")
        fileName:
          type: string
          description: Firmware filename
      required:
        - roomtype_id
        - host_list_str
        - fileName
    UpgradeResponse:
      type: object
      properties:
        IsSuccess:
          type: boolean
        Data:
          type: string
      required:
        - IsSuccess
    QueryRequest:
      type: object
      properties:
        HostIDList:
          type: string
          description: JSON string array of Host IDs (e.g., "[1,2,3]")
      required:
        - HostIDList
    QueryResponse:
      type: object
      properties:
        IsSuccess:
          type: boolean
        Response:
          type: array
          items:
            $ref: '#/components/schemas/HostStatus'
      required:
        - IsSuccess
    HostStatus:
      type: object
      properties:
        HostID:
          type: integer
        Upgrade_status:
          type: string
          enum:
            - 升级就绪
            - 升级完成
            - 升级失败
            - 开始下载
            - 下载中
            - 下载完成
            - 校验中
            - 校验完成
            - RCU升级中
            - 超时失败
        Upgrade_DateTime:
          type: string
          format: date-time
        UpgradeFileType:
          type: string
          enum:
            - 固件升级
            - 配置升级
        FileBlockTotalCount:
          type: integer
        FileCurrentNumber:
          type: integer
        BaiFenBi:
          type: string
        FileName:
          type: string
        Version:
          type: string
        ConfiguraVersion:
          type: string
    UpgradeLog:
      type: object
      description: Schema for database log entry
      properties:
        start_time:
          type: string
          format: date-time
        roomtype_id:
          type: integer
        host_str:
          type: string
        filename:
          type: string
        status:
          type: string
        end_time:
          type: string
          format: date-time
        file_type:
          type: string
        config_version:
          type: string
        firmware_version:
          type: string
        uuid:
          type: string
          format: uuid
    UpgradeConfigGroup:
      type: object
      properties:
        hosts:
          type: array
          items:
            type: integer
        roomtypes:
          type: array
          minItems: 2
          maxItems: 2
          items:
            $ref: '#/components/schemas/UpgradeConfigRoomtype'
      required:
        - hosts
        - roomtypes
    UpgradeConfigRoomtype:
      type: object
      properties:
        roomtype_id:
          type: integer
        fileName:
          type: string
      required:
        - roomtype_id
        - fileName

src/apiClient.js Normal file

@@ -0,0 +1,53 @@
const axios = require('axios');
const config = require('./config');
const apiClient = axios.create({
baseURL: config.apiBaseUrl,
timeout: 10000,
headers: {
'Content-Type': 'application/x-www-form-urlencoded'
}
});
const triggerUpgrade = async (roomtype_id, host_list, fileName) => {
try {
const params = new URLSearchParams();
params.append('roomtype_id', roomtype_id);
params.append('host_list_str', JSON.stringify(host_list));
params.append('fileName', fileName);
console.log('[WebChatUpgrade] Request', {
roomtype_id,
host_list_str: host_list,
fileName
});
const response = await apiClient.post('/WebChatUpgrade', params);
console.log('[WebChatUpgrade] Response', response.data);
return response.data;
} catch (error) {
console.error('Error calling WebChatUpgrade:', error.message);
throw error;
}
};
const queryStatus = async (host_list) => {
try {
const params = new URLSearchParams();
params.append('HostIDList', JSON.stringify(host_list));
console.log('[QueryUpdateHostProgressBar] Request', {
HostIDList: host_list
});
const response = await apiClient.post('/QueryUpdateHostProgressBar', params);
console.log('[QueryUpdateHostProgressBar] Response', response.data);
return response.data;
} catch (error) {
console.error('Error calling QueryUpdateHostProgressBar:', error.message);
throw error;
}
};
module.exports = {
triggerUpgrade,
queryStatus
};
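Both endpoints take `application/x-www-form-urlencoded` bodies, so array arguments travel as JSON strings inside form fields. A minimal sketch of the resulting encoding, using hypothetical IDs and file name:

```javascript
// Hypothetical values; real ones come from UPGRADE_CONFIG.
const params = new URLSearchParams();
params.append('roomtype_id', 5);
params.append('host_list_str', JSON.stringify([101, 102, 103]));
params.append('fileName', 'rcu_fw_a.bin');
// The JSON array is percent-encoded inside the form field.
console.log(params.toString());
// → roomtype_id=5&host_list_str=%5B101%2C102%2C103%5D&fileName=rcu_fw_a.bin
```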
src/config.js Normal file
@@ -0,0 +1,42 @@
require('dotenv').config();
const parseUpgradeConfig = (configStr) => {
try {
if (!configStr) {
throw new Error("UPGRADE_CONFIG environment variable is not set");
}
const config = JSON.parse(configStr);
if (!Array.isArray(config)) {
throw new Error("Config must be an array");
}
// Basic validation
config.forEach((group, idx) => {
if (!Array.isArray(group.hosts)) throw new Error(`Group ${idx} missing hosts array`);
if (!Array.isArray(group.roomtypes) || group.roomtypes.length !== 2) {
throw new Error(`Group ${idx} must have roomtypes array with length 2`);
}
group.roomtypes.forEach((roomtype, rIdx) => {
if (roomtype.roomtype_id == null) throw new Error(`Group ${idx} Roomtype ${rIdx} missing roomtype_id`);
if (!roomtype.fileName) throw new Error(`Group ${idx} Roomtype ${rIdx} missing fileName`);
});
});
return config;
} catch (e) {
console.error("Failed to parse UPGRADE_CONFIG:", e.message);
process.exit(1);
}
};
module.exports = {
port: process.env.PORT || 3000,
db: {
host: process.env.DB_HOST,
port: process.env.DB_PORT,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
},
apiBaseUrl: process.env.API_BASE_URL,
cronSchedule: process.env.CRON_SCHEDULE || '*/10 * * * *',
runOnStartup: String(process.env.RUN_ON_STARTUP || 'false').toLowerCase() === 'true',
upgradeWaitSeconds: Number(process.env.UPGRADE_WAIT_SECONDS || 45),
upgradePollIntervalSeconds: Number(process.env.UPGRADE_POLL_INTERVAL_SECONDS || 45),
upgradeConfig: parseUpgradeConfig(process.env.UPGRADE_CONFIG)
};
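For reference, a minimal `UPGRADE_CONFIG` value that passes the validation above might look like this (host IDs, roomtype IDs, and file names are placeholders):

```javascript
// Standalone sketch; mirrors the shape parseUpgradeConfig() expects.
const sampleEnv = JSON.stringify([
  {
    hosts: [101, 102],
    roomtypes: [
      { roomtype_id: 5, fileName: 'rcu_fw_a.bin' },
      { roomtype_id: 6, fileName: 'rcu_fw_b.bin' }
    ]
  }
]);
const parsed = JSON.parse(sampleEnv);
// One group, two roomtypes: the A/B pair the controller alternates between.
console.log(parsed.length === 1 && parsed[0].roomtypes.length === 2); // true
```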
src/db.js Normal file
@@ -0,0 +1,14 @@
const { Pool } = require('pg');
const config = require('./config');
const pool = new Pool(config.db);
pool.on('error', (err) => {
console.error('Unexpected error on idle client', err);
process.exit(-1);
});
module.exports = {
query: (text, params) => pool.query(text, params),
pool
};
src/index.js Normal file
@@ -0,0 +1,40 @@
const cron = require('node-cron');
const config = require('./config');
const upgradeController = require('./upgradeController');
const db = require('./db');
console.log('RCU Upgrade Service Starting...');
console.log('Schedule:', config.cronSchedule);
// Validate schedule
if (!cron.validate(config.cronSchedule)) {
console.error('Invalid cron schedule');
process.exit(1);
}
// Schedule task
const task = cron.schedule(config.cronSchedule, async () => {
console.log(`[${new Date().toISOString()}] Triggering scheduled upgrade...`);
try {
await upgradeController.run();
} catch (error) {
console.error('Error during scheduled run:', error);
}
});
if (config.runOnStartup) {
console.log(`[${new Date().toISOString()}] Triggering startup upgrade...`);
upgradeController.run().catch((error) => {
console.error('Error during startup run:', error);
});
}
console.log('Service is running. Press Ctrl+C to stop.');
// Graceful shutdown
process.on('SIGINT', async () => {
console.log('Stopping service...');
task.stop();
await db.pool.end();
process.exit(0);
});
src/loggerService.js Normal file
@@ -0,0 +1,50 @@
const db = require('./db');
const logHostResult = async (data) => {
const query = `
INSERT INTO upgrade_log (
uuid, start_time, roomtype_id, host_str, filename, status,
end_time, file_type, config_version, firmware_version
) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
`;
const values = [
data.uuid,
data.start_time,
data.roomtype_id,
data.host_str,
data.filename,
data.status,
data.end_time,
data.file_type,
data.config_version,
data.firmware_version
];
await db.query(query, values);
};
const getUpgradeState = async (key) => {
const res = await db.query('SELECT * FROM upgrade_state WHERE state_key = $1', [key]);
if (res.rows.length > 0) {
return res.rows[0];
}
return null;
};
const updateUpgradeState = async (key, currentRoomtypeIndex, executionCount) => {
const query = `
INSERT INTO upgrade_state (state_key, current_roomtype_index, execution_count, last_updated)
VALUES ($1, $2, $3, NOW())
ON CONFLICT (state_key)
DO UPDATE SET
current_roomtype_index = EXCLUDED.current_roomtype_index,
execution_count = EXCLUDED.execution_count,
last_updated = NOW();
`;
await db.query(query, [key, currentRoomtypeIndex, executionCount]);
};
module.exports = {
logHostResult,
getUpgradeState,
updateUpgradeState
};
src/upgradeController.js Normal file
@@ -0,0 +1,128 @@
const { v4: uuidv4 } = require('uuid');
const apiClient = require('./apiClient');
const loggerService = require('./loggerService');
const config = require('./config');
const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms));
const processGroup = async (group, groupIdx) => {
const stateKey = `group_${groupIdx}`;
let state = await loggerService.getUpgradeState(stateKey);
if (!state) {
state = { current_roomtype_index: 0, execution_count: 0 };
}
const roomtype = group.roomtypes[state.current_roomtype_index];
const roomtype_id = roomtype.roomtype_id;
const fileName = roomtype.fileName;
let nextState = { ...state };
nextState.execution_count += 1;
if (nextState.execution_count >= 2) {
nextState.execution_count = 0;
nextState.current_roomtype_index = 1 - state.current_roomtype_index;
}
const sessionUuid = uuidv4();
const startTime = new Date();
const hostList = group.hosts;
console.log(`[${stateKey}] Starting upgrade. Roomtype: ${roomtype_id}. File: ${fileName}. Hosts: ${hostList}`);
try {
const upgradeRes = await apiClient.triggerUpgrade(roomtype_id, hostList, fileName);
if (!upgradeRes.IsSuccess) {
console.error(`[${stateKey}] Upgrade trigger failed: ${upgradeRes.Data}`);
return;
}
} catch (e) {
console.error(`[${stateKey}] Upgrade trigger error:`, e.message);
return;
}
const waitSeconds = config.upgradeWaitSeconds || 45;
const pollIntervalSeconds = config.upgradePollIntervalSeconds || 45;
console.log(`[${stateKey}] Waiting ${waitSeconds}s...`);
await sleep(waitSeconds * 1000);
const timeout = 5 * 60 * 1000; // overall polling timeout: 5 minutes
const interval = pollIntervalSeconds * 1000;
const pollStartTime = Date.now();
const allHosts = new Set(hostList.map(String));
const lastStatusMap = new Map();
while (Date.now() - pollStartTime < timeout) {
try {
const queryRes = await apiClient.queryStatus(hostList);
if (queryRes && Array.isArray(queryRes.Response)) {
for (const hostStatus of queryRes.Response) {
const hid = String(hostStatus.HostID);
if (allHosts.has(hid)) {
lastStatusMap.set(hid, {
status: hostStatus.Upgrade_status,
file_type: hostStatus.UpgradeFileType,
config_version: hostStatus.ConfiguraVersion,
firmware_version: hostStatus.Version
});
}
}
}
} catch (e) {
console.error(`[${stateKey}] Poll error:`, e.message);
}
if (lastStatusMap.size === allHosts.size) {
const allDone = Array.from(lastStatusMap.values()).every((item) =>
item.status === '升级完成' || item.status === '超时失败'
);
if (allDone) {
console.log(`[${stateKey}] All hosts completed or timeout failed.`);
break;
}
}
await sleep(interval);
}
for (const hid of allHosts) {
const data = lastStatusMap.get(hid) || {
status: '超时失败',
file_type: 'Unknown',
config_version: '',
firmware_version: ''
};
await loggerService.logHostResult({
uuid: sessionUuid,
start_time: startTime.toISOString(),
roomtype_id: roomtype_id,
host_str: hid,
filename: fileName,
status: data.status,
end_time: new Date().toISOString(),
file_type: data.file_type,
config_version: data.config_version,
firmware_version: data.firmware_version
});
console.log(`[${stateKey}] Host ${hid} final status: ${data.status}.`);
}
await loggerService.updateUpgradeState(stateKey, nextState.current_roomtype_index, nextState.execution_count);
};
const run = async () => {
console.log('Starting upgrade cycle...');
const promises = [];
config.upgradeConfig.forEach((group, idx) => {
promises.push(processGroup(group, idx));
});
await Promise.all(promises);
console.log('Upgrade cycle finished.');
};
module.exports = { run };
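The rotation bookkeeping in `processGroup` can be isolated as a small pure function: each roomtype in a group's A/B pair is used for two consecutive runs before the group flips to the other roomtype. A sketch (the helper name `advance` is illustrative, not part of the module):

```javascript
// Mirrors the execution_count / current_roomtype_index update in processGroup.
const advance = (state) => {
  const next = { ...state, execution_count: state.execution_count + 1 };
  if (next.execution_count >= 2) {
    next.execution_count = 0;
    next.current_roomtype_index = 1 - state.current_roomtype_index;
  }
  return next;
};

let state = { current_roomtype_index: 0, execution_count: 0 };
const used = [];
for (let run = 0; run < 4; run++) {
  used.push(state.current_roomtype_index); // index chosen for this run
  state = advance(state);
}
console.log(used.join(',')); // → 0,0,1,1
```

This confirms the "2 consecutive upgrades per version" behavior called out in the plan: runs 1–2 use roomtype index 0, runs 3–4 use index 1.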
tests/apiClient.test.js Normal file
@@ -0,0 +1,9 @@
const apiClient = require('../src/apiClient');
describe('API Client', () => {
test('should be defined', () => {
expect(apiClient).toBeDefined();
expect(apiClient.triggerUpgrade).toBeDefined();
expect(apiClient.queryStatus).toBeDefined();
});
});
tests/config.test.js Normal file
@@ -0,0 +1,17 @@
const config = require('../src/config');
describe('Config Loader', () => {
test('should load and parse UPGRADE_CONFIG correctly', () => {
expect(Array.isArray(config.upgradeConfig)).toBe(true);
const group = config.upgradeConfig[0];
expect(Array.isArray(group.hosts)).toBe(true);
expect(Array.isArray(group.roomtypes)).toBe(true);
expect(group.roomtypes.length).toBe(2);
expect(group.roomtypes[0]).toHaveProperty('roomtype_id');
expect(group.roomtypes[0]).toHaveProperty('fileName');
});
test('should have default cron schedule', () => {
expect(config.cronSchedule).toBeDefined();
});
});