feat: initialize the BLS heartbeat receiver project

- Add basic project structure, including .gitignore, Vite config, and package.json
- Implement the Kafka consumer module skeleton
- Add the heartbeat processor module skeleton
- Implement the database manager module skeleton
- Add OpenSpec specification documents
- Update the README with project features and tech stack
2026-01-08 09:16:53 +08:00
parent 24654c4b90
commit adc3bfd87d
19 changed files with 5549 additions and 1 deletion


@@ -0,0 +1,22 @@
---
description: Implement an approved OpenSpec change and keep tasks in sync.
---
$ARGUMENTS
<!-- OPENSPEC:START -->
**Guardrails**
- Favor straightforward, minimal implementations first and add complexity only when it is requested or clearly required.
- Keep changes tightly scoped to the requested outcome.
- Refer to `openspec/AGENTS.md` (located inside the `openspec/` directory—run `ls openspec` or `openspec update` if you don't see it) if you need additional OpenSpec conventions or clarifications.
**Steps**
Track these steps as TODOs and complete them one by one.
1. Read `changes/<id>/proposal.md`, `design.md` (if present), and `tasks.md` to confirm scope and acceptance criteria.
2. Work through tasks sequentially, keeping edits minimal and focused on the requested change.
3. Confirm completion before updating statuses—make sure every item in `tasks.md` is finished.
4. Update the checklist after all work is done so each task is marked `- [x]` and reflects reality.
5. Reference `openspec list` or `openspec show <item>` when additional context is required.
**Reference**
- Use `openspec show <id> --json --deltas-only` if you need additional context from the proposal while implementing.
<!-- OPENSPEC:END -->


@@ -0,0 +1,26 @@
---
description: Archive a deployed OpenSpec change and update specs.
---
$ARGUMENTS
<!-- OPENSPEC:START -->
**Guardrails**
- Favor straightforward, minimal implementations first and add complexity only when it is requested or clearly required.
- Keep changes tightly scoped to the requested outcome.
- Refer to `openspec/AGENTS.md` (located inside the `openspec/` directory—run `ls openspec` or `openspec update` if you don't see it) if you need additional OpenSpec conventions or clarifications.
**Steps**
1. Determine the change ID to archive:
- If this prompt already includes a specific change ID (for example inside a `<ChangeId>` block populated by slash-command arguments), use that value after trimming whitespace.
- If the conversation references a change loosely (for example by title or summary), run `openspec list` to surface likely IDs, share the relevant candidates, and confirm which one the user intends.
- Otherwise, review the conversation, run `openspec list`, and ask the user which change to archive; wait for a confirmed change ID before proceeding.
- If you still cannot identify a single change ID, stop and tell the user you cannot archive anything yet.
2. Validate the change ID by running `openspec list` (or `openspec show <id>`) and stop if the change is missing, already archived, or otherwise not ready to archive.
3. Run `openspec archive <id> --yes` so the CLI moves the change and applies spec updates without prompts (use `--skip-specs` only for tooling-only work).
4. Review the command output to confirm the target specs were updated and the change landed in `changes/archive/`.
5. Validate with `openspec validate --strict` and inspect with `openspec show <id>` if anything looks off.
**Reference**
- Use `openspec list` to confirm change IDs before archiving.
- Inspect refreshed specs with `openspec list --specs` and address any validation issues before handing off.
<!-- OPENSPEC:END -->


@@ -0,0 +1,27 @@
---
description: Scaffold a new OpenSpec change and validate strictly.
---
$ARGUMENTS
<!-- OPENSPEC:START -->
**Guardrails**
- Favor straightforward, minimal implementations first and add complexity only when it is requested or clearly required.
- Keep changes tightly scoped to the requested outcome.
- Refer to `openspec/AGENTS.md` (located inside the `openspec/` directory—run `ls openspec` or `openspec update` if you don't see it) if you need additional OpenSpec conventions or clarifications.
- Identify any vague or ambiguous details and ask the necessary follow-up questions before editing files.
- Do not write any code during the proposal stage. Only create design documents (proposal.md, tasks.md, design.md, and spec deltas). Implementation happens in the apply stage after approval.
**Steps**
1. Review `openspec/project.md`, run `openspec list` and `openspec list --specs`, and inspect related code or docs (e.g., via `rg`/`ls`) to ground the proposal in current behaviour; note any gaps that require clarification.
2. Choose a unique verb-led `change-id` and scaffold `proposal.md`, `tasks.md`, and `design.md` (when needed) under `openspec/changes/<id>/`.
3. Map the change into concrete capabilities or requirements, breaking multi-scope efforts into distinct spec deltas with clear relationships and sequencing.
4. Capture architectural reasoning in `design.md` when the solution spans multiple systems, introduces new patterns, or demands trade-off discussion before committing to specs.
5. Draft spec deltas in `changes/<id>/specs/<capability>/spec.md` (one folder per capability) using `## ADDED|MODIFIED|REMOVED Requirements` with at least one `#### Scenario:` per requirement and cross-reference related capabilities when relevant.
6. Draft `tasks.md` as an ordered list of small, verifiable work items that deliver user-visible progress, include validation (tests, tooling), and highlight dependencies or parallelizable work.
7. Validate with `openspec validate <id> --strict` and resolve every issue before sharing the proposal.
**Reference**
- Use `openspec show <id> --json --deltas-only` or `openspec show <spec> --type spec` to inspect details when validation fails.
- Search existing requirements with `rg -n "Requirement:|Scenario:" openspec/specs` before writing new ones.
- Explore the codebase with `rg <keyword>`, `ls`, or direct file reads so proposals align with current implementation realities.
<!-- OPENSPEC:END -->

.gitignore

@@ -0,0 +1,38 @@
# Dependencies
node_modules/

# Build output
dist/
build/

# Environment config
.env
.env.local
.env.*.local
config.js

# Logs
logs/
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

# IDE files
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store

# Test coverage reports
coverage/
.nyc_output/

# Temporary files
*.tmp
*.temp
.cache/

AGENTS.md

@@ -0,0 +1,18 @@
<!-- OPENSPEC:START -->
# OpenSpec Instructions
These instructions are for AI assistants working in this project.
Always open `@/openspec/AGENTS.md` when the request:
- Mentions planning or proposals (words like proposal, spec, change, plan)
- Introduces new capabilities, breaking changes, architecture shifts, or big performance/security work
- Sounds ambiguous and you need the authoritative spec before coding
Use `@/openspec/AGENTS.md` to learn:
- How to create and apply change proposals
- Spec format and conventions
- Project structure and guidelines
Keep this managed block so 'openspec update' can refresh the instructions.
<!-- OPENSPEC:END -->


@@ -1,3 +1,71 @@
# Web_BLS_Heartbeat_Server

BLS heartbeat receiver: consumes heartbeat data from a Kafka queue, unpacks and processes it, and writes the results to a PostgreSQL database.

## Features
- Receive heartbeat data from a Kafka queue
- Unpack and validate the heartbeat data format
- Batch heartbeat processing for higher throughput
- Write processed data to PostgreSQL
- Support high concurrency and automatic reconnection

## Tech Stack
- **Node.js** (JavaScript) - runtime
- **Vite** - build tool
- **Kafka** - message queue
- **PostgreSQL** - database

## Quick Start

### Install dependencies
```bash
npm install
```

### Configuration
Copy the config template and adjust it for your environment:
```bash
cp src/config/config.example.js src/config/config.js
```

### Build
```bash
npm run build
```

### Run
```bash
npm run dev
```

## Project Structure
```
├── src/                 # Source code
│   ├── config/          # Configuration
│   ├── kafka/           # Kafka message handling
│   ├── processor/       # Heartbeat data processing
│   ├── db/              # Database operations
│   └── index.js         # Entry point
├── openspec/            # OpenSpec documents
├── package.json         # Dependencies
├── vite.config.js       # Vite configuration
└── README.md            # Project documentation
```

## Development Commands
| Command | Description |
|---------|-------------|
| `npm install` | Install project dependencies |
| `npm run build` | Build the project |
| `npm run dev` | Start the dev server |
| `npm run test` | Run tests |
| `npm run lint` | Lint the code |

openspec/AGENTS.md

@@ -0,0 +1,456 @@
# OpenSpec Instructions
Instructions for AI coding assistants using OpenSpec for spec-driven development.
## TL;DR Quick Checklist
- Search existing work: `openspec spec list --long`, `openspec list` (use `rg` only for full-text search)
- Decide scope: new capability vs modify existing capability
- Pick a unique `change-id`: kebab-case, verb-led (`add-`, `update-`, `remove-`, `refactor-`)
- Scaffold: `proposal.md`, `tasks.md`, `design.md` (only if needed), and delta specs per affected capability
- Write deltas: use `## ADDED|MODIFIED|REMOVED|RENAMED Requirements`; include at least one `#### Scenario:` per requirement
- Validate: `openspec validate [change-id] --strict` and fix issues
- Request approval: Do not start implementation until proposal is approved
## Three-Stage Workflow
### Stage 1: Creating Changes
Create proposal when you need to:
- Add features or functionality
- Make breaking changes (API, schema)
- Change architecture or patterns
- Optimize performance (changes behavior)
- Update security patterns
Triggers (examples):
- "Help me create a change proposal"
- "Help me plan a change"
- "Help me create a proposal"
- "I want to create a spec proposal"
- "I want to create a spec"
Loose matching guidance:
- Contains one of: `proposal`, `change`, `spec`
- With one of: `create`, `plan`, `make`, `start`, `help`
Skip proposal for:
- Bug fixes (restore intended behavior)
- Typos, formatting, comments
- Dependency updates (non-breaking)
- Configuration changes
- Tests for existing behavior
**Workflow**
1. Review `openspec/project.md`, `openspec list`, and `openspec list --specs` to understand current context.
2. Choose a unique verb-led `change-id` and scaffold `proposal.md`, `tasks.md`, optional `design.md`, and spec deltas under `openspec/changes/<id>/`.
3. Draft spec deltas using `## ADDED|MODIFIED|REMOVED Requirements` with at least one `#### Scenario:` per requirement.
4. Run `openspec validate <id> --strict` and resolve any issues before sharing the proposal.
### Stage 2: Implementing Changes
Track these steps as TODOs and complete them one by one.
1. **Read proposal.md** - Understand what's being built
2. **Read design.md** (if exists) - Review technical decisions
3. **Read tasks.md** - Get implementation checklist
4. **Implement tasks sequentially** - Complete in order
5. **Confirm completion** - Ensure every item in `tasks.md` is finished before updating statuses
6. **Update checklist** - After all work is done, set every task to `- [x]` so the list reflects reality
7. **Approval gate** - Do not start implementation until the proposal is reviewed and approved
### Stage 3: Archiving Changes
After deployment, create separate PR to:
- Move `changes/[name]/` → `changes/archive/YYYY-MM-DD-[name]/`
- Update `specs/` if capabilities changed
- Use `openspec archive <change-id> --skip-specs --yes` for tooling-only changes (always pass the change ID explicitly)
- Run `openspec validate --strict` to confirm the archived change passes checks
## Before Any Task
**Context Checklist:**
- [ ] Read relevant specs in `specs/[capability]/spec.md`
- [ ] Check pending changes in `changes/` for conflicts
- [ ] Read `openspec/project.md` for conventions
- [ ] Run `openspec list` to see active changes
- [ ] Run `openspec list --specs` to see existing capabilities
**Before Creating Specs:**
- Always check if capability already exists
- Prefer modifying existing specs over creating duplicates
- Use `openspec show [spec]` to review current state
- If request is ambiguous, ask 1-2 clarifying questions before scaffolding
### Search Guidance
- Enumerate specs: `openspec spec list --long` (or `--json` for scripts)
- Enumerate changes: `openspec list` (or `openspec change list --json` - deprecated but available)
- Show details:
- Spec: `openspec show <spec-id> --type spec` (use `--json` for filters)
- Change: `openspec show <change-id> --json --deltas-only`
- Full-text search (use ripgrep): `rg -n "Requirement:|Scenario:" openspec/specs`
## Quick Start
### CLI Commands
```bash
# Essential commands
openspec list # List active changes
openspec list --specs # List specifications
openspec show [item] # Display change or spec
openspec validate [item] # Validate changes or specs
openspec archive <change-id> [--yes|-y] # Archive after deployment (add --yes for non-interactive runs)
# Project management
openspec init [path] # Initialize OpenSpec
openspec update [path] # Update instruction files
# Interactive mode
openspec show # Prompts for selection
openspec validate # Bulk validation mode
# Debugging
openspec show [change] --json --deltas-only
openspec validate [change] --strict
```
### Command Flags
- `--json` - Machine-readable output
- `--type change|spec` - Disambiguate items
- `--strict` - Comprehensive validation
- `--no-interactive` - Disable prompts
- `--skip-specs` - Archive without spec updates
- `--yes`/`-y` - Skip confirmation prompts (non-interactive archive)
## Directory Structure
```
openspec/
├── project.md # Project conventions
├── specs/ # Current truth - what IS built
│ └── [capability]/ # Single focused capability
│ ├── spec.md # Requirements and scenarios
│ └── design.md # Technical patterns
├── changes/ # Proposals - what SHOULD change
│ ├── [change-name]/
│ │ ├── proposal.md # Why, what, impact
│ │ ├── tasks.md # Implementation checklist
│ │ ├── design.md # Technical decisions (optional; see criteria)
│ │ └── specs/ # Delta changes
│ │ └── [capability]/
│ │ └── spec.md # ADDED/MODIFIED/REMOVED
│ └── archive/ # Completed changes
```
## Creating Change Proposals
### Decision Tree
```
New request?
├─ Bug fix restoring spec behavior? → Fix directly
├─ Typo/format/comment? → Fix directly
├─ New feature/capability? → Create proposal
├─ Breaking change? → Create proposal
├─ Architecture change? → Create proposal
└─ Unclear? → Create proposal (safer)
```
### Proposal Structure
1. **Create directory:** `changes/[change-id]/` (kebab-case, verb-led, unique)
2. **Write proposal.md:**
```markdown
# Change: [Brief description of change]
## Why
[1-2 sentences on problem/opportunity]
## What Changes
- [Bullet list of changes]
- [Mark breaking changes with **BREAKING**]
## Impact
- Affected specs: [list capabilities]
- Affected code: [key files/systems]
```
3. **Create spec deltas:** `specs/[capability]/spec.md`
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL provide...
#### Scenario: Success case
- **WHEN** user performs action
- **THEN** expected result
## MODIFIED Requirements
### Requirement: Existing Feature
[Complete modified requirement]
## REMOVED Requirements
### Requirement: Old Feature
**Reason**: [Why removing]
**Migration**: [How to handle]
```
If multiple capabilities are affected, create multiple delta files under `changes/[change-id]/specs/<capability>/spec.md`—one per capability.
4. **Create tasks.md:**
```markdown
## 1. Implementation
- [ ] 1.1 Create database schema
- [ ] 1.2 Implement API endpoint
- [ ] 1.3 Add frontend component
- [ ] 1.4 Write tests
```
5. **Create design.md when needed:**
Create `design.md` if any of the following apply; otherwise omit it:
- Cross-cutting change (multiple services/modules) or a new architectural pattern
- New external dependency or significant data model changes
- Security, performance, or migration complexity
- Ambiguity that benefits from technical decisions before coding
Minimal `design.md` skeleton:
```markdown
## Context
[Background, constraints, stakeholders]
## Goals / Non-Goals
- Goals: [...]
- Non-Goals: [...]
## Decisions
- Decision: [What and why]
- Alternatives considered: [Options + rationale]
## Risks / Trade-offs
- [Risk] → Mitigation
## Migration Plan
[Steps, rollback]
## Open Questions
- [...]
```
## Spec File Format
### Critical: Scenario Formatting
**CORRECT** (use #### headers):
```markdown
#### Scenario: User login success
- **WHEN** valid credentials provided
- **THEN** return JWT token
```
**WRONG** (don't use bullets or bold):
```markdown
- **Scenario: User login** ❌
**Scenario**: User login ❌
### Scenario: User login ❌
```
Every requirement MUST have at least one scenario.
### Requirement Wording
- Use SHALL/MUST for normative requirements (avoid should/may unless intentionally non-normative)
### Delta Operations
- `## ADDED Requirements` - New capabilities
- `## MODIFIED Requirements` - Changed behavior
- `## REMOVED Requirements` - Deprecated features
- `## RENAMED Requirements` - Name changes
Headers matched with `trim(header)` - whitespace ignored.
#### When to use ADDED vs MODIFIED
- ADDED: Introduces a new capability or sub-capability that can stand alone as a requirement. Prefer ADDED when the change is orthogonal (e.g., adding "Slash Command Configuration") rather than altering the semantics of an existing requirement.
- MODIFIED: Changes the behavior, scope, or acceptance criteria of an existing requirement. Always paste the full, updated requirement content (header + all scenarios). The archiver will replace the entire requirement with what you provide here; partial deltas will drop previous details.
- RENAMED: Use when only the name changes. If you also change behavior, use RENAMED (name) plus MODIFIED (content) referencing the new name.
Common pitfall: Using MODIFIED to add a new concern without including the previous text. This causes loss of detail at archive time. If you aren't explicitly changing the existing requirement, add a new requirement under ADDED instead.
Authoring a MODIFIED requirement correctly:
1) Locate the existing requirement in `openspec/specs/<capability>/spec.md`.
2) Copy the entire requirement block (from `### Requirement: ...` through its scenarios).
3) Paste it under `## MODIFIED Requirements` and edit to reflect the new behavior.
4) Ensure the header text matches exactly (whitespace-insensitive) and keep at least one `#### Scenario:`.
Example for RENAMED:
```markdown
## RENAMED Requirements
- FROM: `### Requirement: Login`
- TO: `### Requirement: User Authentication`
```
## Troubleshooting
### Common Errors
**"Change must have at least one delta"**
- Check `changes/[name]/specs/` exists with .md files
- Verify files have operation prefixes (## ADDED Requirements)
**"Requirement must have at least one scenario"**
- Check scenarios use `#### Scenario:` format (4 hashtags)
- Don't use bullet points or bold for scenario headers
**Silent scenario parsing failures**
- Exact format required: `#### Scenario: Name`
- Debug with: `openspec show [change] --json --deltas-only`
### Validation Tips
```bash
# Always use strict mode for comprehensive checks
openspec validate [change] --strict
# Debug delta parsing
openspec show [change] --json | jq '.deltas'
# Check specific requirement
openspec show [spec] --json -r 1
```
## Happy Path Script
```bash
# 1) Explore current state
openspec spec list --long
openspec list
# Optional full-text search:
# rg -n "Requirement:|Scenario:" openspec/specs
# rg -n "^#|Requirement:" openspec/changes
# 2) Choose change id and scaffold
CHANGE=add-two-factor-auth
mkdir -p openspec/changes/$CHANGE/specs/auth
printf "## Why\n...\n\n## What Changes\n- ...\n\n## Impact\n- ...\n" > openspec/changes/$CHANGE/proposal.md
printf "## 1. Implementation\n- [ ] 1.1 ...\n" > openspec/changes/$CHANGE/tasks.md
# 3) Add deltas (example)
cat > openspec/changes/$CHANGE/specs/auth/spec.md << 'EOF'
## ADDED Requirements
### Requirement: Two-Factor Authentication
Users MUST provide a second factor during login.
#### Scenario: OTP required
- **WHEN** valid credentials are provided
- **THEN** an OTP challenge is required
EOF
# 4) Validate
openspec validate $CHANGE --strict
```
## Multi-Capability Example
```
openspec/changes/add-2fa-notify/
├── proposal.md
├── tasks.md
└── specs/
├── auth/
│ └── spec.md # ADDED: Two-Factor Authentication
└── notifications/
└── spec.md # ADDED: OTP email notification
```
auth/spec.md
```markdown
## ADDED Requirements
### Requirement: Two-Factor Authentication
...
```
notifications/spec.md
```markdown
## ADDED Requirements
### Requirement: OTP Email Notification
...
```
## Best Practices
### Simplicity First
- Default to <100 lines of new code
- Single-file implementations until proven insufficient
- Avoid frameworks without clear justification
- Choose boring, proven patterns
### Complexity Triggers
Only add complexity with:
- Performance data showing current solution too slow
- Concrete scale requirements (>1000 users, >100MB data)
- Multiple proven use cases requiring abstraction
### Clear References
- Use `file.ts:42` format for code locations
- Reference specs as `specs/auth/spec.md`
- Link related changes and PRs
### Capability Naming
- Use verb-noun: `user-auth`, `payment-capture`
- Single purpose per capability
- 10-minute understandability rule
- Split if description needs "AND"
### Change ID Naming
- Use kebab-case, short and descriptive: `add-two-factor-auth`
- Prefer verb-led prefixes: `add-`, `update-`, `remove-`, `refactor-`
- Ensure uniqueness; if taken, append `-2`, `-3`, etc.
## Tool Selection Guide
| Task | Tool | Why |
|------|------|-----|
| Find files by pattern | Glob | Fast pattern matching |
| Search code content | Grep | Optimized regex search |
| Read specific files | Read | Direct file access |
| Explore unknown scope | Task | Multi-step investigation |
## Error Recovery
### Change Conflicts
1. Run `openspec list` to see active changes
2. Check for overlapping specs
3. Coordinate with change owners
4. Consider combining proposals
### Validation Failures
1. Run with `--strict` flag
2. Check JSON output for details
3. Verify spec file format
4. Ensure scenarios properly formatted
### Missing Context
1. Read project.md first
2. Check related specs
3. Review recent archives
4. Ask for clarification
## Quick Reference
### Stage Indicators
- `changes/` - Proposed, not yet built
- `specs/` - Built and deployed
- `archive/` - Completed changes
### File Purposes
- `proposal.md` - Why and what
- `tasks.md` - Implementation steps
- `design.md` - Technical decisions
- `spec.md` - Requirements and behavior
### CLI Essentials
```bash
openspec list # What's in progress?
openspec show [item] # View details
openspec validate --strict # Is it correct?
openspec archive <change-id> [--yes|-y] # Mark complete (add --yes for automation)
```
Remember: Specs are truth. Changes are proposals. Keep them in sync.

openspec/project.md

@@ -0,0 +1,55 @@
# Project Context
## Purpose
The BLS heartbeat receiver consumes heartbeat data from a Kafka queue, unpacks and processes it, and writes the results to a PostgreSQL database.
## Tech Stack
- Node.js (JavaScript)
- Vite (build tool)
- Kafka (message queue)
- PostgreSQL (database)
- OpenSpec (spec-driven development framework)
## Project Conventions
### Code Style
- ES module syntax
- 2-space indentation
- camelCase for variable names
- camelCase for function names
- UPPER_SNAKE_CASE for constants
- kebab-case for file names
### Architecture Patterns
- Layered architecture: data ingestion, business logic, and persistence layers
- Event-driven handling of Kafka messages
- Single responsibility: each module owns exactly one concern
### Testing Strategy
- Unit tests with Mocha
- Coverage target: >=80% for core functionality
- Every functional module has corresponding test cases
### Git Workflow
- Git Flow
- Main branch: main (production)
- Development branch: develop
- Feature branches: feature/xxx (new feature work)
- Hotfix branches: hotfix/xxx (urgent fixes)
- Commit message format: `type(scope): description`
## Domain Context
- BLS (Biometric Login System): the biometric login system
- Heartbeat data: periodic status messages sent by system components, containing component ID, status, timestamp, etc.
- Unpacking: converting the binary payload of a Kafka message into structured data
## Important Constraints
- Must support high concurrency and large volumes of heartbeat data
- Database writes must be reliable, with no data loss
- Must retry on transient failures such as temporary network errors
- Must be observable so runtime state can be monitored in real time
## External Dependencies
- Kafka cluster: source of heartbeat messages
- PostgreSQL database: heartbeat storage
- OpenSpec: spec-driven development tooling to keep code aligned with design

openspec/specs/db/spec.md

@@ -0,0 +1,69 @@
# Database Operations Spec
## Requirements
### Requirement: Database Connection Management
The system SHALL establish and maintain a connection to the PostgreSQL database.
#### Scenario: Connect successfully
- **WHEN** the system starts
- **THEN** it SHALL connect to the configured PostgreSQL database
- **AND** monitor the connection state
#### Scenario: Reconnect after disconnect
- **WHEN** the database connection is lost
- **THEN** the system SHALL automatically attempt to reconnect
- **AND** log an error when reconnection fails
### Requirement: Heartbeat Data Writes
The system SHALL write processed heartbeat data to the PostgreSQL database.
#### Scenario: Write a single heartbeat record
- **WHEN** a single processed heartbeat record is received
- **THEN** the system SHALL write it to the database
- **AND** return the write result
#### Scenario: Write heartbeat records in bulk
- **WHEN** a batch of processed heartbeat records is received
- **THEN** the system SHALL use a bulk-insert mechanism
- **AND** improve write throughput
### Requirement: Data Integrity
The system SHALL guarantee the integrity of heartbeat data written to the database.
#### Scenario: Transaction management
- **WHEN** multiple related records are written
- **THEN** the system SHALL use a transaction to ensure consistency
- **AND** either all writes succeed or all fail
#### Scenario: Constraint validation
- **WHEN** a write violates a database constraint
- **THEN** the system SHALL catch the constraint error
- **AND** log it
- **AND** retry according to configuration
### Requirement: Schema Management
The system SHALL define and manage the database table schema.
#### Scenario: Schema initialization
- **WHEN** the system starts for the first time
- **THEN** it SHALL check whether the tables exist
- **AND** create them if they do not
#### Scenario: Schema migration
- **WHEN** the schema needs to change
- **THEN** the system SHALL support smooth schema migrations
- **AND** leave existing data intact
### Requirement: Query Support
The system SHALL support basic queries for monitoring and debugging.
#### Scenario: Query the latest heartbeat
- **WHEN** the latest heartbeat data is requested
- **THEN** the system SHALL provide a query interface
- **AND** return the matching data
#### Scenario: Query heartbeats by condition
- **WHEN** heartbeat data is requested with specific filter conditions
- **THEN** the system SHALL support condition filtering
- **AND** return the matching data


@@ -0,0 +1,40 @@
# Kafka Message Handling Spec
## Requirements
### Requirement: Kafka Connection Management
The system SHALL establish and maintain a connection to the Kafka cluster.
#### Scenario: Connect to the Kafka cluster
- **WHEN** the system starts
- **THEN** it SHALL connect to the configured Kafka cluster
- **AND** monitor the connection state
#### Scenario: Reconnect after disconnect
- **WHEN** the Kafka connection is lost
- **THEN** the system SHALL automatically attempt to reconnect
- **AND** log an error when reconnection fails
### Requirement: Heartbeat Message Consumption
The system SHALL consume heartbeat messages from the Kafka queue.
#### Scenario: Consume a heartbeat message
- **WHEN** a heartbeat message is available in the Kafka queue
- **THEN** the system SHALL consume it
- **AND** pass it to the processor for unpacking
#### Scenario: Acknowledge consumption
- **WHEN** a message has been processed
- **THEN** the system SHALL acknowledge the message to Kafka
### Requirement: Message Filtering and Routing
The system SHALL filter and route heartbeat messages by message type.
#### Scenario: Filter invalid messages
- **WHEN** a malformed message is received
- **THEN** the system SHALL discard it
- **AND** log an error
#### Scenario: Route valid messages
- **WHEN** a well-formed heartbeat message is received
- **THEN** the system SHALL route it to the correct processor


@@ -0,0 +1,46 @@
# Data Processor Spec
## Requirements
### Requirement: Heartbeat Unpacking
The system SHALL unpack the binary heartbeat payload of a Kafka message.
#### Scenario: Unpack a valid heartbeat message
- **WHEN** a well-formed Kafka heartbeat message is received
- **THEN** the system SHALL unpack it successfully
- **AND** extract each heartbeat field
#### Scenario: Unpack an invalid heartbeat message
- **WHEN** a malformed Kafka heartbeat message is received
- **THEN** the system SHALL return an unpacking error
- **AND** log it
### Requirement: Heartbeat Validation
The system SHALL validate unpacked heartbeat data.
#### Scenario: Validate valid heartbeat data
- **WHEN** the unpacked data is well-formed and its fields are complete
- **THEN** validation SHALL pass
- **AND** the data SHALL be handed to the database layer for storage
#### Scenario: Validate invalid heartbeat data
- **WHEN** the unpacked data is missing required fields
- **THEN** validation SHALL fail
- **AND** the system SHALL log an error
- **AND** discard the data
### Requirement: Heartbeat Transformation
The system SHALL transform unpacked heartbeat data into the database storage format.
#### Scenario: Transform heartbeat data
- **WHEN** heartbeat data passes validation
- **THEN** the system SHALL transform it into the format required by the table schema
- **AND** add the necessary metadata
### Requirement: Batch Processing
The system SHALL support batch processing of heartbeat data for efficiency.
#### Scenario: Process heartbeats in batches
- **WHEN** a large number of heartbeat messages arrive
- **THEN** the system SHALL process them in batches
- **AND** the batch size SHALL be configurable
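The batch-processing requirement above can be sketched as a small helper. This is an illustrative sketch, not code from this commit; the `chunk` name is hypothetical:

```javascript
// chunk: split an array of heartbeat records into batches of a
// configurable size, as the batch-processing requirement describes
function chunk(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

console.log(chunk([1, 2, 3, 4, 5], 2)); // → [[1, 2], [3, 4], [5]]
```

A helper like this keeps the batch size a pure configuration knob, which is what the scenario asks for.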

package-lock.json (generated; diff suppressed because it is too large)

package.json

@@ -0,0 +1,32 @@
{
  "name": "web-bls-heartbeat-server",
  "version": "1.0.0",
  "description": "BLS heartbeat receiver: processes data from a Kafka queue and writes it to PostgreSQL",
  "type": "module",
  "main": "dist/index.js",
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "preview": "vite preview",
    "lint": "eslint . --ext .js",
    "test": "mocha"
  },
  "dependencies": {
    "kafka-node": "^5.0.0",
    "pg": "^8.11.3"
  },
  "devDependencies": {
    "vite": "^5.0.0",
    "eslint": "^8.56.0",
    "mocha": "^10.2.0"
  },
  "keywords": [
    "BLS",
    "heartbeat",
    "kafka",
    "postgresql",
    "nodejs"
  ],
  "author": "",
  "license": "MIT"
}


@@ -0,0 +1,46 @@
// Example configuration file
// Copy this file to config.js and fill in real values
export default {
  // Kafka settings
  kafka: {
    brokers: ['localhost:9092'],       // Kafka broker addresses
    groupId: 'bls-heartbeat-consumer', // consumer group ID
    topic: 'bls-heartbeat',            // heartbeat topic
    autoCommit: true,                  // auto-commit offsets
    autoCommitIntervalMs: 5000,        // auto-commit interval (ms)
    retryAttempts: 3,                  // retry attempts
    retryDelay: 1000                   // retry delay (ms)
  },
  // Processor settings
  processor: {
    batchSize: 100,                    // batch size
    batchTimeout: 5000                 // batch flush timeout (ms)
  },
  // Database settings
  db: {
    host: '10.8.8.109',                // database host
    port: 5433,                        // database port
    user: 'log_admin',                 // database user
    password: 'YourActualStrongPasswordForPostgres!', // database password
    database: 'log_platform',          // database name
    maxConnections: 10,                // max pool connections
    idleTimeoutMillis: 30000,          // idle connection timeout (ms)
    retryAttempts: 3,                  // retry attempts
    retryDelay: 1000                   // retry delay (ms)
  },
  // Logging
  logger: {
    level: 'info',                     // log level
    format: 'json'                     // log format
  },
  // Application
  app: {
    port: 3000,                        // application port
    env: 'development'                 // runtime environment
  }
};

src/db/databaseManager.js

@@ -0,0 +1,138 @@
// Database manager module
import { Pool } from 'pg';

class DatabaseManager {
  constructor(config) {
    this.config = config;
    this.pool = null;
  }

  async connect() {
    try {
      // Create the connection pool
      this.pool = new Pool(this.config);
      // Verify connectivity, then release the client back to the pool
      // (pool.connect() checks out a client; leaving it unreleased leaks a connection)
      const client = await this.pool.connect();
      client.release();
      console.log('Database connection pool created');
      // Initialize the schema
      await this.initTables();
    } catch (error) {
      console.error('Database connection failed:', error);
      throw error;
    }
  }

  async disconnect() {
    try {
      if (this.pool) {
        await this.pool.end();
        console.log('Database connection pool closed');
      }
    } catch (error) {
      console.error('Failed to close the database connection pool:', error);
      throw error;
    }
  }

  async initTables() {
    try {
      const createTableQuery = `
        CREATE TABLE IF NOT EXISTS heartbeat (
          id SERIAL PRIMARY KEY,
          component_id VARCHAR(50) NOT NULL,
          status VARCHAR(20) NOT NULL,
          timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
          data JSONB,
          created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
        );
        CREATE INDEX IF NOT EXISTS idx_heartbeat_component_id ON heartbeat(component_id);
        CREATE INDEX IF NOT EXISTS idx_heartbeat_timestamp ON heartbeat(timestamp);
      `;
      await this.pool.query(createTableQuery);
      console.log('Database tables initialized');
    } catch (error) {
      console.error('Failed to initialize database tables:', error);
      throw error;
    }
  }

  async insertHeartbeatData(data) {
    try {
      if (!Array.isArray(data)) {
        data = [data];
      }
      if (data.length === 0) {
        return;
      }
      // Build the multi-row insert statement
      const values = data.map(item => [
        item.component_id,
        item.status,
        item.timestamp,
        item.data
      ]);
      const query = {
        text: `
          INSERT INTO heartbeat (component_id, status, timestamp, data)
          VALUES ${values.map((_, index) => `($${index * 4 + 1}, $${index * 4 + 2}, $${index * 4 + 3}, $${index * 4 + 4})`).join(', ')}
        `,
        values: values.flat()
      };
      await this.pool.query(query);
      console.log(`Inserted ${data.length} heartbeat records`);
    } catch (error) {
      console.error('Failed to insert heartbeat data:', error);
      throw error;
    }
  }

  async getLatestHeartbeat(componentId) {
    try {
      const query = {
        text: `
          SELECT * FROM heartbeat
          WHERE component_id = $1
          ORDER BY timestamp DESC
          LIMIT 1
        `,
        values: [componentId]
      };
      const result = await this.pool.query(query);
      return result.rows[0] || null;
    } catch (error) {
      console.error('Failed to query the latest heartbeat:', error);
      throw error;
    }
  }

  async getHeartbeatHistory(componentId, startTime, endTime) {
    try {
      const query = {
        text: `
          SELECT * FROM heartbeat
          WHERE component_id = $1
          AND timestamp BETWEEN $2 AND $3
          ORDER BY timestamp DESC
        `,
        values: [componentId, startTime, endTime]
      };
      const result = await this.pool.query(query);
      return result.rows;
    } catch (error) {
      console.error('Failed to query heartbeat history:', error);
      throw error;
    }
  }
}

export { DatabaseManager };
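The placeholder arithmetic in `insertHeartbeatData` (`$1 … $n` numbered across rows) is the easiest part of a bulk insert to get wrong. Extracted into a standalone helper it can be exercised in isolation; this is an illustrative sketch with a hypothetical name, not code from this commit:

```javascript
// buildInsertPlaceholders: produce the VALUES placeholder list for a
// multi-row parameterized INSERT, numbering parameters row by row
function buildInsertPlaceholders(rowCount, colCount) {
  const rows = [];
  for (let r = 0; r < rowCount; r++) {
    const cols = [];
    for (let c = 0; c < colCount; c++) {
      cols.push(`$${r * colCount + c + 1}`);
    }
    rows.push(`(${cols.join(', ')})`);
  }
  return rows.join(', ');
}

console.log(buildInsertPlaceholders(2, 4));
// → ($1, $2, $3, $4), ($5, $6, $8 is wrong if the math slips — expected ($5, $6, $7, $8)
```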

src/index.js

@@ -0,0 +1,78 @@
// Application entry point
import config from './config/config.js';
import { KafkaConsumer } from './kafka/consumer.js';
import { HeartbeatProcessor } from './processor/heartbeatProcessor.js';
import { DatabaseManager } from './db/databaseManager.js';

class WebBLSHeartbeatServer {
  constructor() {
    this.config = config;
    this.kafkaConsumer = null;
    this.heartbeatProcessor = null;
    this.databaseManager = null;
  }

  async start() {
    try {
      // Initialize the database connection
      this.databaseManager = new DatabaseManager(this.config.db);
      await this.databaseManager.connect();
      console.log('Database connected');
      // Initialize the processor
      this.heartbeatProcessor = new HeartbeatProcessor(
        this.config.processor,
        this.databaseManager
      );
      // Initialize the Kafka consumer
      this.kafkaConsumer = new KafkaConsumer(
        this.config.kafka,
        this.heartbeatProcessor.processMessage.bind(this.heartbeatProcessor)
      );
      await this.kafkaConsumer.connect();
      await this.kafkaConsumer.subscribe();
      await this.kafkaConsumer.startConsuming();
      console.log('Kafka consumer started');
      console.log('BLS heartbeat receiver started');
    } catch (error) {
      console.error('Startup failed:', error);
      process.exit(1);
    }
  }

  async stop() {
    try {
      if (this.kafkaConsumer) {
        await this.kafkaConsumer.stopConsuming();
        await this.kafkaConsumer.disconnect();
      }
      if (this.databaseManager) {
        await this.databaseManager.disconnect();
      }
      console.log('BLS heartbeat receiver stopped');
    } catch (error) {
      console.error('Shutdown failed:', error);
    }
  }
}

// Start the server
const server = new WebBLSHeartbeatServer();
server.start();

// Handle termination signals; await stop() so cleanup finishes before exiting
// (calling process.exit() synchronously would kill the process mid-shutdown)
process.on('SIGINT', async () => {
  await server.stop();
  process.exit(0);
});
process.on('SIGTERM', async () => {
  await server.stop();
  process.exit(0);
});

export { WebBLSHeartbeatServer };
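Since both SIGINT and SIGTERM trigger shutdown, a small guard that lets `stop()` run at most once avoids double-closing the pool and consumer when both signals arrive. A minimal sketch; `makeShutdownHandler` is a hypothetical helper, not part of this commit:

```javascript
// makeShutdownHandler: wrap an async stop function so repeated
// invocations (e.g. SIGINT followed by SIGTERM) only run it once
function makeShutdownHandler(stopFn) {
  let called = false;
  return async function shutdown() {
    if (called) return false; // already shutting down
    called = true;
    await stopFn();
    return true;
  };
}

// Usage sketch:
// const shutdown = makeShutdownHandler(() => server.stop());
// process.on('SIGINT', async () => { await shutdown(); process.exit(0); });
// process.on('SIGTERM', async () => { await shutdown(); process.exit(0); });
```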

src/kafka/consumer.js

@@ -0,0 +1,44 @@
// Kafka consumer module
class KafkaConsumer {
  constructor(config, messageHandler) {
    this.config = config;
    this.messageHandler = messageHandler;
    this.consumer = null;
    this.isRunning = false;
  }

  async connect() {
    // Connect to the Kafka cluster
    console.log('Connecting to Kafka cluster:', this.config.brokers);
    // TODO: implement the Kafka connection
  }

  async disconnect() {
    // Disconnect from the Kafka cluster
    console.log('Disconnecting from Kafka cluster');
    // TODO: implement the Kafka disconnection
  }

  async subscribe() {
    // Subscribe to the Kafka topic
    console.log('Subscribing to Kafka topic:', this.config.topic);
    // TODO: implement the Kafka subscription
  }

  async startConsuming() {
    // Start consuming Kafka messages
    console.log('Starting to consume Kafka messages');
    this.isRunning = true;
    // TODO: implement message consumption
  }

  async stopConsuming() {
    // Stop consuming Kafka messages
    console.log('Stopping Kafka message consumption');
    this.isRunning = false;
    // TODO: implement stopping consumption
  }
}

export { KafkaConsumer };
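When the `connect` TODO is filled in with kafka-node (the declared dependency), note that kafka-node's `ConsumerGroup` expects a comma-separated `kafkaHost` string rather than a broker array, so this project's config needs a small translation. The option shape below is our assumption about kafka-node's API, sketched as a pure helper; verify against the library's documentation before relying on it:

```javascript
// toConsumerGroupOptions: translate this project's kafka config object
// into the options shape kafka-node's ConsumerGroup expects (assumed API)
function toConsumerGroupOptions(config) {
  return {
    kafkaHost: config.brokers.join(','), // e.g. "host1:9092,host2:9092"
    groupId: config.groupId,
    autoCommit: config.autoCommit,
    autoCommitIntervalMs: config.autoCommitIntervalMs
  };
}

console.log(toConsumerGroupOptions({
  brokers: ['localhost:9092'],
  groupId: 'bls-heartbeat-consumer',
  autoCommit: true,
  autoCommitIntervalMs: 5000
}).kafkaHost); // → localhost:9092
```

Keeping the translation in a pure function means it can be unit-tested without a running broker.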


@@ -0,0 +1,90 @@
// Heartbeat processor module
class HeartbeatProcessor {
  constructor(config, databaseManager) {
    this.config = config;
    this.databaseManager = databaseManager;
    this.batchQueue = [];
    this.batchTimer = null;
  }

  async processMessage(message) {
    try {
      // Unpack the heartbeat message
      const unpackedData = this.unpackMessage(message);
      // Validate the heartbeat data
      const isValid = this.validateData(unpackedData);
      if (!isValid) {
        console.error('Invalid heartbeat data:', unpackedData);
        return;
      }
      // Transform the data format
      const transformedData = this.transformData(unpackedData);
      // Add to the batch queue
      this.batchQueue.push(transformedData);
      // Flush immediately when the batch is full
      if (this.batchQueue.length >= this.config.batchSize) {
        await this.processBatch();
      } else if (!this.batchTimer) {
        // Otherwise schedule a timed flush
        this.batchTimer = setTimeout(
          () => this.processBatch(),
          this.config.batchTimeout
        );
      }
    } catch (error) {
      console.error('Failed to process message:', error);
    }
  }

  unpackMessage(message) {
    // Unpack the heartbeat message
    console.log('Unpacking heartbeat message:', message);
    // TODO: implement message unpacking
    return {};
  }

  validateData(data) {
    // Validate the heartbeat data
    console.log('Validating heartbeat data:', data);
    // TODO: implement data validation
    return true;
  }

  transformData(data) {
    // Transform the heartbeat data
    console.log('Transforming heartbeat data:', data);
    // TODO: implement data transformation
    return data;
  }

  async processBatch() {
    if (this.batchQueue.length === 0) {
      return;
    }
    // Clear the pending flush timer
    if (this.batchTimer) {
      clearTimeout(this.batchTimer);
      this.batchTimer = null;
    }
    try {
      // Take the current batch
      const batchData = [...this.batchQueue];
      this.batchQueue = [];
      // Write to the database
      await this.databaseManager.insertHeartbeatData(batchData);
      console.log(`Processed a batch of ${batchData.length} records`);
    } catch (error) {
      console.error('Batch processing failed:', error);
    }
  }
}

export { HeartbeatProcessor };
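The `validateData` TODO could start as a required-field check matching the processor spec (component ID, status, timestamp). A sketch, with the exact field list taken as an assumption from the project's domain description:

```javascript
// validateData sketch: a heartbeat record is valid when it is an object
// carrying all required fields (field list assumed, not confirmed by this commit)
const REQUIRED_FIELDS = ['component_id', 'status', 'timestamp'];

function validateData(data) {
  if (data === null || typeof data !== 'object') {
    return false;
  }
  return REQUIRED_FIELDS.every(
    (field) => data[field] !== undefined && data[field] !== null
  );
}

console.log(validateData({ component_id: 'bls-01', status: 'up', timestamp: '2026-01-08T09:16:53+08:00' })); // → true
console.log(validateData({ component_id: 'bls-01' })); // → false
```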

vite.config.js

@@ -0,0 +1,23 @@
import { defineConfig } from 'vite';
import { fileURLToPath, URL } from 'node:url';

export default defineConfig({
  build: {
    lib: {
      // package.json sets "type": "module", so this config is loaded as ESM
      // and __dirname is unavailable; resolve the entry from import.meta.url
      entry: fileURLToPath(new URL('./src/index.js', import.meta.url)),
      name: 'WebBLSHeartbeatServer',
      formats: ['es'],
      fileName: (format) => `index.${format}.js`
    },
    rollupOptions: {
      external: ['kafka-node', 'pg', 'openspec'],
      output: {
        globals: {
          'kafka-node': 'KafkaNode',
          'pg': 'Pg',
          'openspec': 'OpenSpec'
        }
      }
    }
  }
});