feat: initialize the bls-onoffline-backend project skeleton
Add the core modules (Kafka consumer, database writes, Redis integration) to handle device on/offline events.
- Create the base project directory structure and configuration files
- Implement Kafka consumption with manual offset commits
- Add the PostgreSQL connection and partitioned-table management
- Integrate Redis for the error queue and the project heartbeat
- Include data-processing logic that distinguishes reboot from non-reboot records
- Provide the database initialization script and the partition-creation utility
- Add unit tests and code-validation scripts
.gitignore (vendored, new file, 2 lines)
@@ -0,0 +1,2 @@
/bls-onoffline-backend/node_modules
/template
agent.md (new file, 86 lines)
@@ -0,0 +1,86 @@
Development Framework Constraints (for AI-driven project creation)

Purpose: this file constrains the technology choices, directory structure, engineering practices, and delivery process the AI follows when creating or refactoring a project. Unless explicitly instructed otherwise by a human, the AI must not deviate from the constraints in this file.

1. Runtime Environment and Basic Constraints

- Node.js version: Node.js 22+ is required (the latest LTS is recommended).
- Primary language: JavaScript (.js).
- Type-checking schemes (for example JSDoc + // @ts-check) may be introduced where necessary, as sketched below, but TypeScript is not the primary language by default.
- Package manager: **`npm` is mandatory and used exclusively**.
- Cross-platform: must support both Windows (PowerShell) and Unix-like environments by default.
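To illustrate the JSDoc + `// @ts-check` option allowed above, a minimal sketch (it mirrors the `parseNumber` helper used elsewhere in this commit; the standalone file is hypothetical, not mandated):

```js
// @ts-check
// Type-checked plain JavaScript: an editor or type checker can verify the JSDoc
// annotations without making TypeScript the primary language.

/**
 * Parse an environment variable as a number, falling back to a default.
 * @param {string | undefined} value raw environment variable value
 * @param {number} defaultValue value to use when parsing fails
 * @returns {number}
 */
export function parseNumber(value, defaultValue) {
  const parsed = Number(value);
  return Number.isFinite(parsed) ? parsed : defaultValue;
}
```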
2. Technology Stack Constraints

2.1 Frontend (if a frontend is needed)

- Framework: Vue 3.x is required.
- Ecosystem libraries: only introduce libraries compatible with Vue 3.x; avoid legacy libraries tied to Vue 2.x.
- Build tool: prefer Vite (if this conflicts with an existing project, explain why and stay consistent with that project).

2.2 Backend (if a backend is needed)

- Runtime: Node.js is required.
- Language: the backend is likewise primarily JavaScript.
- API style: HTTP JSON APIs by default (adopting GraphQL, WebSocket, etc. must be explicitly justified and still follow the OpenSpec constraints).

3. OpenSpec (Spec-Driven) Development Process Constraints

> Note: OpenSpec here refers to the spec-driven toolchain obtained by globally installing `@fission-ai/openspec`; for API work, the interface contract must use and comply with OpenAPI 3.1. The two do not conflict: OpenSpec drives and validates the process, while OpenAPI 3.1 is the contract the spec files must satisfy.

3.0 OpenSpec Toolchain Installation (mandatory)

- Development and CI environments must have a working OpenSpec toolchain:
  - Install command: npm install -g @fission-ai/openspec@latest
- When the AI generates project scripts:
  - Spec-validation capabilities must be wired into npm scripts (see 3.3).
  - It must not bypass OpenSpec validation and deliver API implementations that are not governed by a spec.

3.1 Required Spec Deliverables

- The project must contain a traceable spec file:
  - API projects: `spec/openapi.yaml` (or `spec/openapi.json`), OpenAPI 3.1.
  - Non-API projects: still provide a corresponding specification (for example process flows, data structures, input/output contracts) under the spec/ directory.
- The spec file must be:
  - checkable (lint/validate)
  - consistent with the implementation (implementation changes must update the spec in step)

3.2 Development Order (mandatory)

1. Write or update the spec first (spec-first): before adding or changing functionality, update the spec under `spec/`.
2. Then implement: the implementation must match the spec.
3. Then verify: CI and local scripts must include a spec-validation step.
4. Then document: the README must explain how to view and use the spec and how to run validation.

3.3 Spec Validation and Integration (mandatory)

- The following scripts must be provided (the names are examples and may be adapted per project, but the scripts must not be missing); a sketch follows this section:
  - npm run spec:lint: run OpenSpec lint over spec/ (see openspec --help for the exact CLI arguments)
  - npm run spec:validate: run OpenSpec structure/reference/contract validation over spec/ (see openspec --help for the exact CLI arguments)
- For APIs:
  - Provide request/response validation in the implementation, or at least contract validation in the test stage.
  - Generating client/server stubs or type definitions from the OpenAPI file is encouraged (not required), but it must not change the JS-first premise.
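One possible way to wire `npm run spec:validate` up is a small Node wrapper, assuming the globally installed `openspec` CLI from 3.0 (the script path and the exact flags are illustrative; confirm them with `openspec --help`):

```js
// scripts/spec-validate.mjs (hypothetical helper script)
// Runs OpenSpec validation in strict mode and propagates a non-zero exit code
// so `npm run spec:validate` fails when the spec does not pass.
import { execSync } from "node:child_process";

try {
  execSync("openspec validate --strict", { stdio: "inherit" });
} catch (error) {
  process.exit(typeof error.status === "number" ? error.status : 1);
}
```

`npm run spec:lint` can wrap the corresponding lint invocation in the same way.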
4. Project Structure Constraints (suggested defaults)

When creating a project, the AI uses the following structure by default; if the project type does not fit, make minimal adjustments without violating the constraints.

- spec/: OpenSpec specifications (OpenAPI or other specification documents)
- src/: source code
- tests/: tests
- scripts/: engineering scripts (build/validation/generation, etc.)
- README.md: must cover how to run, how to test, and how to use the specs

5. Quality and Delivery Constraints (mandatory)

- Basic scripts must be provided:
  - npm run dev (if interactive development applies)
  - npm run build (if a build step is needed)
  - npm run test
  - npm run lint
- Change requirements:
  - When changing the implementation, update spec/ and the tests in step.
  - Do not change only the implementation without the spec, nor only the spec without the implementation.

6. AI Behavior Constraints (mandatory)

- If a user request conflicts with this file:
  - Point out the conflict first and ask the user to confirm whether deviating from the constraints is allowed.
- Unless explicitly requested:
  - Do not introduce extra pages, features, components, or fancy configuration unrelated to the constraints.
  - Keep the implementation minimal, usable, verifiable, and maintainable.
bls-onoffline-backend/.env (new file, 37 lines)
@@ -0,0 +1,37 @@
KAFKA_BROKERS=kafka.blv-oa.com:9092
KAFKA_CLIENT_ID=bls-onoffline-producer
KAFKA_GROUP_ID=bls-onoffline-consumer
KAFKA_TOPICS=blwlog4Nodejs-rcu-onoffline-topic-0
KAFKA_AUTO_COMMIT=false
KAFKA_AUTO_COMMIT_INTERVAL_MS=5000
KAFKA_SASL_ENABLED=true
KAFKA_SASL_MECHANISM=plain
KAFKA_SASL_USERNAME=blwmomo
KAFKA_SASL_PASSWORD=blwmomo
KAFKA_SSL_ENABLED=false
KAFKA_CONSUMER_INSTANCES=6
KAFKA_MAX_IN_FLIGHT=50
KAFKA_FETCH_MAX_BYTES=10485760
KAFKA_FETCH_MAX_WAIT_MS=100
KAFKA_FETCH_MIN_BYTES=1

POSTGRES_HOST=10.8.8.109
POSTGRES_PORT=5433
POSTGRES_DATABASE=log_platform
POSTGRES_USER=log_admin
POSTGRES_PASSWORD=YourActualStrongPasswordForPostgres!
POSTGRES_MAX_CONNECTIONS=6
POSTGRES_IDLE_TIMEOUT_MS=30000
DB_SCHEMA=onoffline
DB_TABLE=onoffline_record

PORT=3001
LOG_LEVEL=info

# Redis connection
REDIS_HOST=10.8.8.109
REDIS_PORT=6379
REDIS_PASSWORD=
REDIS_DB=15
REDIS_CONNECT_TIMEOUT_MS=5000
REDIS_PROJECT_NAME=bls-onoffline
bls-onoffline-backend/.env.example (new file, 31 lines)
@@ -0,0 +1,31 @@
# Server Configuration
PORT=3001
NODE_ENV=development

# Kafka Configuration
KAFKA_BROKERS=localhost:9092
KAFKA_TOPIC=blwlog4Nodejs-rcu-onoffline-topic
KAFKA_GROUP_ID=bls-onoffline-group
KAFKA_CLIENT_ID=bls-onoffline-client
KAFKA_CONSUMER_INSTANCES=1
# KAFKA_SASL_USERNAME=
# KAFKA_SASL_PASSWORD=
# KAFKA_SASL_MECHANISM=plain

# Database Configuration (PostgreSQL)
DB_HOST=localhost
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=password
DB_DATABASE=log_platform
DB_SCHEMA=public
DB_TABLE=onoffline_record
DB_MAX_CONNECTIONS=10

# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
REDIS_DB=0
REDIS_PROJECT_NAME=bls-onoffline
REDIS_API_BASE_URL=http://localhost:3001
bls-onoffline-backend/AGENTS.md (new file, 18 lines)
@@ -0,0 +1,18 @@
<!-- OPENSPEC:START -->
# OpenSpec Instructions

These instructions are for AI assistants working in this project.

Always open `@/openspec/AGENTS.md` when the request:
- Mentions planning or proposals (words like proposal, spec, change, plan)
- Introduces new capabilities, breaking changes, architecture shifts, or big performance/security work
- Sounds ambiguous and you need the authoritative spec before coding

Use `@/openspec/AGENTS.md` to learn:
- How to create and apply change proposals
- Spec format and conventions
- Project structure and guidelines

Keep this managed block so 'openspec update' can refresh the instructions.

<!-- OPENSPEC:END -->
bls-onoffline-backend/README.md (new file, 24 lines)
@@ -0,0 +1,24 @@
bls-onoffline-backend

Installation and Running
- Node.js 22+
- npm install
- npm run dev

Build and Test
- npm run build
- npm run test
- npm run lint

Spec Validation
- npm run spec:lint
- npm run spec:validate

Environment Variables
- Copy .env.example to .env and configure it for your environment

Database Initialization
- On startup the service runs scripts/init_db.sql automatically and pre-creates partitions for the next 30 days (see the sketch following this README)

Specification
- The spec file lives at spec/onoffline-spec.md
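As a reading aid for the partition pre-creation mentioned above, a minimal sketch of how each daily partition's name and millisecond range are derived (the full implementation ships in dist/index.js later in this commit):

```js
// Illustrative only: compute the suffix and the [start, end) epoch-millisecond range for one
// daily partition, matching the <schema>.<table>_YYYYMMDD naming used by the partition manager.
function getPartitionInfo(date) {
  const yyyy = date.getFullYear();
  const mm = String(date.getMonth() + 1).padStart(2, "0");
  const dd = String(date.getDate()).padStart(2, "0");
  const partitionSuffix = `${yyyy}${mm}${dd}`; // e.g. "20250101"
  const start = new Date(date);
  start.setHours(0, 0, 0, 0);                  // local midnight at the start of the day
  const end = new Date(start);
  end.setDate(end.getDate() + 1);              // local midnight of the following day
  return { startMs: start.getTime(), endMs: end.getTime(), partitionSuffix };
}
```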
bls-onoffline-backend/dist/index.js (vendored, new file, 857 lines)
@@ -0,0 +1,857 @@
import cron from "node-cron";
import dotenv from "dotenv";
import pg from "pg";
import fs from "fs";
import path from "path";
import { fileURLToPath } from "url";
import kafka from "kafka-node";
import { randomUUID } from "crypto";
import { z } from "zod";
import { createClient } from "redis";
dotenv.config();
const parseNumber = (value, defaultValue) => {
  const parsed = Number(value);
  return Number.isFinite(parsed) ? parsed : defaultValue;
};
const parseList = (value) => (value || "").split(",").map((item) => item.trim()).filter(Boolean);
const config = {
  env: process.env.NODE_ENV || "development",
  port: parseNumber(process.env.PORT, 3001),
  kafka: {
    brokers: parseList(process.env.KAFKA_BROKERS),
    topic: process.env.KAFKA_TOPIC || process.env.KAFKA_TOPICS || "blwlog4Nodejs-rcu-onoffline-topic",
    groupId: process.env.KAFKA_GROUP_ID || "bls-onoffline-group",
    clientId: process.env.KAFKA_CLIENT_ID || "bls-onoffline-client",
    consumerInstances: parseNumber(process.env.KAFKA_CONSUMER_INSTANCES, 1),
    maxInFlight: parseNumber(process.env.KAFKA_MAX_IN_FLIGHT, 50),
    fetchMaxBytes: parseNumber(process.env.KAFKA_FETCH_MAX_BYTES, 10 * 1024 * 1024),
    fetchMinBytes: parseNumber(process.env.KAFKA_FETCH_MIN_BYTES, 1),
    fetchMaxWaitMs: parseNumber(process.env.KAFKA_FETCH_MAX_WAIT_MS, 100),
    autoCommitIntervalMs: parseNumber(process.env.KAFKA_AUTO_COMMIT_INTERVAL_MS, 5e3),
    logMessages: process.env.KAFKA_LOG_MESSAGES === "true",
    sasl: process.env.KAFKA_SASL_USERNAME && process.env.KAFKA_SASL_PASSWORD ? {
      mechanism: process.env.KAFKA_SASL_MECHANISM || "plain",
      username: process.env.KAFKA_SASL_USERNAME,
      password: process.env.KAFKA_SASL_PASSWORD
    } : void 0
  },
  db: {
    host: process.env.DB_HOST || process.env.POSTGRES_HOST || "localhost",
    port: parseNumber(process.env.DB_PORT || process.env.POSTGRES_PORT, 5432),
    user: process.env.DB_USER || process.env.POSTGRES_USER || "postgres",
    password: process.env.DB_PASSWORD || process.env.POSTGRES_PASSWORD || "",
    database: process.env.DB_DATABASE || process.env.POSTGRES_DATABASE || "log_platform",
    max: parseNumber(process.env.DB_MAX_CONNECTIONS || process.env.POSTGRES_MAX_CONNECTIONS, 10),
    ssl: process.env.DB_SSL === "true" ? { rejectUnauthorized: false } : void 0,
    schema: process.env.DB_SCHEMA || "onoffline",
    table: process.env.DB_TABLE || "onoffline_record"
  },
  redis: {
    host: process.env.REDIS_HOST || "localhost",
    port: parseNumber(process.env.REDIS_PORT, 6379),
    password: process.env.REDIS_PASSWORD || void 0,
    db: parseNumber(process.env.REDIS_DB, 0),
    projectName: process.env.REDIS_PROJECT_NAME || "bls-onoffline",
    apiBaseUrl: process.env.REDIS_API_BASE_URL || `http://localhost:${parseNumber(process.env.PORT, 3001)}`
  }
};
const format = (level, message, context) => {
  const payload = {
    level,
    message,
    timestamp: Date.now(),
    ...context ? { context } : {}
  };
  return JSON.stringify(payload);
};
const logger$1 = {
  info(message, context) {
    process.stdout.write(`${format("info", message, context)}\n`);
  },
  // warn-level output; used by the Kafka consumer events and the DB-retry loop below.
  warn(message, context) {
    process.stderr.write(`${format("warn", message, context)}\n`);
  },
  error(message, context) {
    process.stderr.write(`${format("error", message, context)}\n`);
  }
};
const { Pool } = pg;
const columns = [
  "guid",
  "ts_ms",
  "write_ts_ms",
  "hotel_id",
  "mac",
  "device_id",
  "room_id",
  "ip",
  "current_status",
  "launcher_version",
  "reboot_reason"
];
class DatabaseManager {
  constructor(dbConfig) {
    this.pool = new Pool({
      host: dbConfig.host,
      port: dbConfig.port,
      user: dbConfig.user,
      password: dbConfig.password,
      database: dbConfig.database,
      max: dbConfig.max,
      ssl: dbConfig.ssl
    });
  }
  async insertRows({ schema, table, rows }) {
    if (!rows || rows.length === 0) {
      return;
    }
    const values = [];
    const placeholders = rows.map((row, rowIndex) => {
      const offset = rowIndex * columns.length;
      columns.forEach((column) => {
        values.push(row[column] ?? null);
      });
      const params = columns.map((_, columnIndex) => `$${offset + columnIndex + 1}`);
      return `(${params.join(", ")})`;
    });
    const statement = `
      INSERT INTO ${schema}.${table} (${columns.join(", ")})
      VALUES ${placeholders.join(", ")}
      ON CONFLICT DO NOTHING
    `;
    try {
      await this.pool.query(statement, values);
    } catch (error) {
      logger$1.error("Database insert failed", {
        error: error?.message,
        schema,
        table,
        rowsLength: rows.length
      });
      throw error;
    }
  }
  async checkConnection() {
    let client;
    try {
      const connectPromise = this.pool.connect();
      const timeoutPromise = new Promise((_, reject) => {
        setTimeout(() => reject(new Error("Connection timeout")), 5e3);
      });
      try {
        client = await Promise.race([connectPromise, timeoutPromise]);
      } catch (raceError) {
        connectPromise.then((c) => c.release()).catch(() => {
        });
        throw raceError;
      }
      await client.query("SELECT 1");
      return true;
    } catch (err) {
      logger$1.error("Database check connection failed", { error: err.message });
      return false;
    } finally {
      if (client) {
        client.release();
      }
    }
  }
  async close() {
    await this.pool.end();
  }
}
const dbManager = new DatabaseManager(config.db);
class PartitionManager {
  /**
   * Calculate the start and end timestamps (milliseconds) for a given date.
   * @param {Date} date - The date to calculate for.
   * @returns {Object} { startMs, endMs, partitionSuffix }
   */
  getPartitionInfo(date) {
    const yyyy = date.getFullYear();
    const mm = String(date.getMonth() + 1).padStart(2, "0");
    const dd = String(date.getDate()).padStart(2, "0");
    const partitionSuffix = `${yyyy}${mm}${dd}`;
    const start = new Date(date);
    start.setHours(0, 0, 0, 0);
    const startMs = start.getTime();
    const end = new Date(date);
    end.setDate(end.getDate() + 1);
    end.setHours(0, 0, 0, 0);
    const endMs = end.getTime();
    return { startMs, endMs, partitionSuffix };
  }
  /**
   * Ensure partitions exist for the past M days and next N days.
   * @param {number} daysAhead - Number of days to pre-create.
   * @param {number} daysBack - Number of days to look back.
   */
  async ensurePartitions(daysAhead = 30, daysBack = 15) {
    const client = await dbManager.pool.connect();
    try {
      logger$1.info(`Starting partition check for the past ${daysBack} days and next ${daysAhead} days...`);
      console.log(`Starting partition check for the past ${daysBack} days and next ${daysAhead} days...`);
      const now = /* @__PURE__ */ new Date();
      for (let i = -daysBack; i < daysAhead; i++) {
        const targetDate = new Date(now);
        targetDate.setDate(now.getDate() + i);
        const { startMs, endMs, partitionSuffix } = this.getPartitionInfo(targetDate);
        const schema = config.db.schema;
        const table = config.db.table;
        const partitionName = `${schema}.${table}_${partitionSuffix}`;
        const checkSql = `
          SELECT to_regclass($1) as exists;
        `;
        const checkRes = await client.query(checkSql, [partitionName]);
        if (!checkRes.rows[0].exists) {
          logger$1.info(`Creating partition ${partitionName} for range [${startMs}, ${endMs})`);
          console.log(`Creating partition ${partitionName} for range [${startMs}, ${endMs})`);
          const createSql = `
            CREATE TABLE IF NOT EXISTS ${partitionName}
            PARTITION OF ${schema}.${table}
            FOR VALUES FROM (${startMs}) TO (${endMs});
          `;
          await client.query(createSql);
        }
      }
      logger$1.info("Partition check completed.");
    } catch (err) {
      logger$1.error("Error ensuring partitions:", err);
      throw err;
    } finally {
      client.release();
    }
  }
}
const partitionManager = new PartitionManager();
const __filename$1 = fileURLToPath(import.meta.url);
const __dirname$1 = path.dirname(__filename$1);
class DatabaseInitializer {
  async initialize() {
    logger$1.info("Starting database initialization check...");
    await this.ensureDatabaseExists();
    await this.ensureSchemaAndTable();
    await partitionManager.ensurePartitions(30);
    console.log("Database initialization completed successfully.");
    logger$1.info("Database initialization completed successfully.");
  }
  async ensureDatabaseExists() {
    const { host, port, user, password, database, ssl } = config.db;
    console.log(`Checking if database '${database}' exists at ${host}:${port}...`);
    const client = new pg.Client({
      host,
      port,
      user,
      password,
      database: "postgres",
      ssl: ssl ? { rejectUnauthorized: false } : false
    });
    try {
      await client.connect();
      const checkRes = await client.query(
        `SELECT 1 FROM pg_database WHERE datname = $1`,
        [database]
      );
      if (checkRes.rowCount === 0) {
        logger$1.info(`Database '${database}' does not exist. Creating...`);
        await client.query(`CREATE DATABASE "${database}"`);
        console.log(`Database '${database}' created.`);
        logger$1.info(`Database '${database}' created.`);
      } else {
        console.log(`Database '${database}' already exists.`);
        logger$1.info(`Database '${database}' already exists.`);
      }
    } catch (err) {
      logger$1.error("Error ensuring database exists:", err);
      throw err;
    } finally {
      await client.end();
    }
  }
  async ensureSchemaAndTable() {
    const client = await dbManager.pool.connect();
    try {
      const sqlPathCandidates = [
        path.resolve(process.cwd(), "scripts/init_db.sql"),
        path.resolve(__dirname$1, "../scripts/init_db.sql"),
        path.resolve(__dirname$1, "../../scripts/init_db.sql")
      ];
      const sqlPath = sqlPathCandidates.find((candidate) => fs.existsSync(candidate));
      if (!sqlPath) {
        throw new Error(`init_db.sql not found. Candidates: ${sqlPathCandidates.join(" | ")}`);
      }
      const sql = fs.readFileSync(sqlPath, "utf8");
      console.log(`Executing init_db.sql from ${sqlPath}...`);
      logger$1.info("Executing init_db.sql...");
      await client.query(sql);
      console.log("Schema and parent table initialized.");
      logger$1.info("Schema and parent table initialized.");
    } catch (err) {
      logger$1.error("Error initializing schema and table:", err);
      throw err;
    } finally {
      client.release();
    }
  }
}
const dbInitializer = new DatabaseInitializer();
class OffsetTracker {
  constructor() {
    this.partitions = /* @__PURE__ */ new Map();
  }
  // Called when a message is received (before processing)
  add(topic, partition, offset) {
    const key = `${topic}-${partition}`;
    if (!this.partitions.has(key)) {
      this.partitions.set(key, []);
    }
    this.partitions.get(key).push({ offset, done: false });
  }
  // Called when a message is successfully processed
  // Returns the next offset to commit (if any advancement is possible), or null
  markDone(topic, partition, offset) {
    const key = `${topic}-${partition}`;
    const list = this.partitions.get(key);
    if (!list) return null;
    const item = list.find((i) => i.offset === offset);
    if (item) {
      item.done = true;
    }
    let lastDoneOffset = null;
    let itemsRemoved = false;
    while (list.length > 0 && list[0].done) {
      lastDoneOffset = list[0].offset;
      list.shift();
      itemsRemoved = true;
    }
    if (itemsRemoved && lastDoneOffset !== null) {
      return lastDoneOffset + 1;
    }
    return null;
  }
}
const { ConsumerGroup } = kafka;
const createOneConsumer = ({ kafkaConfig, onMessage, onError, instanceIndex }) => {
  const kafkaHost = kafkaConfig.brokers.join(",");
  const clientId = instanceIndex === 0 ? kafkaConfig.clientId : `${kafkaConfig.clientId}-${instanceIndex}`;
  const id = `${clientId}-${process.pid}-${Date.now()}`;
  const maxInFlight = Number.isFinite(kafkaConfig.maxInFlight) ? kafkaConfig.maxInFlight : 50;
  let inFlight = 0;
  const tracker = new OffsetTracker();
  const consumer = new ConsumerGroup(
    {
      kafkaHost,
      groupId: kafkaConfig.groupId,
      clientId,
      id,
      fromOffset: "earliest",
      protocol: ["roundrobin"],
      outOfRangeOffset: "latest",
      autoCommit: false,
      autoCommitIntervalMs: kafkaConfig.autoCommitIntervalMs,
      fetchMaxBytes: kafkaConfig.fetchMaxBytes,
      fetchMinBytes: kafkaConfig.fetchMinBytes,
      fetchMaxWaitMs: kafkaConfig.fetchMaxWaitMs,
      sasl: kafkaConfig.sasl
    },
    kafkaConfig.topic
  );
  const tryResume = () => {
    if (inFlight < maxInFlight && consumer.paused) {
      consumer.resume();
    }
  };
  consumer.on("message", (message) => {
    inFlight += 1;
    tracker.add(message.topic, message.partition, message.offset);
    if (inFlight >= maxInFlight) {
      consumer.pause();
    }
    Promise.resolve(onMessage(message)).then(() => {
      const commitOffset = tracker.markDone(message.topic, message.partition, message.offset);
      if (commitOffset !== null) {
        consumer.sendOffsetCommitRequest([{
          topic: message.topic,
          partition: message.partition,
          offset: commitOffset,
          metadata: "m"
        }], (err) => {
          if (err) {
            logger$1.error("Kafka commit failed", { error: err?.message, topic: message.topic, partition: message.partition, offset: commitOffset });
          }
        });
      }
    }).catch((error) => {
      logger$1.error("Kafka message handling failed", { error: error?.message });
      if (onError) {
        onError(error, message);
      }
    }).finally(() => {
      inFlight -= 1;
      tryResume();
    });
  });
  consumer.on("error", (error) => {
    logger$1.error("Kafka consumer error", { error: error?.message });
    if (onError) {
      onError(error);
    }
  });
  consumer.on("connect", () => {
    logger$1.info(`Kafka Consumer connected`, {
      groupId: kafkaConfig.groupId,
      clientId
    });
  });
  consumer.on("rebalancing", () => {
    logger$1.info(`Kafka Consumer rebalancing`, {
      groupId: kafkaConfig.groupId,
      clientId
    });
  });
  consumer.on("rebalanced", () => {
    logger$1.info("Kafka Consumer rebalanced", { clientId, groupId: kafkaConfig.groupId });
  });
  consumer.on("error", (err) => {
    logger$1.error("Kafka Consumer Error", { error: err.message });
  });
  consumer.on("offsetOutOfRange", (err) => {
    logger$1.warn("Offset out of range", { error: err.message, topic: err.topic, partition: err.partition });
  });
  consumer.on("offsetOutOfRange", (error) => {
    logger$1.warn(`Kafka Consumer offset out of range`, {
      error: error?.message,
      groupId: kafkaConfig.groupId,
      clientId
    });
  });
  consumer.on("close", () => {
    logger$1.warn(`Kafka Consumer closed`, {
      groupId: kafkaConfig.groupId,
      clientId
    });
  });
  return consumer;
};
const createKafkaConsumers = ({ kafkaConfig, onMessage, onError }) => {
  const instances = Number.isFinite(kafkaConfig.consumerInstances) ? kafkaConfig.consumerInstances : 1;
  const count = Math.max(1, instances);
  return Array.from(
    { length: count },
    (_, idx) => createOneConsumer({ kafkaConfig, onMessage, onError, instanceIndex: idx })
  );
};
const createGuid = () => randomUUID().replace(/-/g, "");
const toNumber = (value) => {
  if (value === void 0 || value === null || value === "") {
    return value;
  }
  if (typeof value === "number") {
    return value;
  }
  const parsed = Number(value);
  return Number.isFinite(parsed) ? parsed : value;
};
const toStringAllowEmpty = (value) => {
  if (value === void 0 || value === null) {
    return value;
  }
  return String(value);
};
const kafkaPayloadSchema = z.object({
  HotelCode: z.preprocess(toNumber, z.number()),
  MAC: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  HostNumber: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  RoomNumber: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  EndPoint: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  CurrentStatus: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  CurrentTime: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  UnixTime: z.preprocess(toNumber, z.number().nullable()).optional().nullable(),
  LauncherVersion: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  RebootReason: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable()
});
const normalizeText = (value, maxLength) => {
  if (value === void 0 || value === null) {
    return null;
  }
  const str = String(value);
  if (maxLength && str.length > maxLength) {
    return str.substring(0, maxLength);
  }
  return str;
};
const buildRowsFromPayload = (rawPayload) => {
  const payload = kafkaPayloadSchema.parse(rawPayload);
  const rebootReason = normalizeText(payload.RebootReason, 255);
  const currentStatusRaw = normalizeText(payload.CurrentStatus, 255);
  const hasRebootReason = rebootReason !== null && rebootReason !== "";
  const currentStatus = hasRebootReason ? "on" : currentStatusRaw;
  let tsMs = payload.UnixTime;
  if (typeof tsMs === "number" && tsMs < 1e11) {
    tsMs = tsMs * 1e3;
  }
  if (!tsMs && payload.CurrentTime) {
    const parsed = Date.parse(payload.CurrentTime);
    if (!isNaN(parsed)) {
      tsMs = parsed;
    }
  }
  if (!tsMs) {
    tsMs = Date.now();
  }
  const mac = normalizeText(payload.MAC) || "";
  const deviceId = normalizeText(payload.HostNumber) || "";
  const roomId = normalizeText(payload.RoomNumber) || "";
  const row = {
    guid: createGuid(),
    ts_ms: tsMs,
    write_ts_ms: Date.now(),
    hotel_id: payload.HotelCode,
    mac,
    device_id: deviceId,
    room_id: roomId,
    ip: normalizeText(payload.EndPoint),
    current_status: currentStatus,
    launcher_version: normalizeText(payload.LauncherVersion, 255),
    reboot_reason: rebootReason
  };
  return [row];
};
const processKafkaMessage = async ({ message, dbManager: dbManager2, config: config2 }) => {
  let rows;
  try {
    const rawValue = message.value.toString();
    let payload;
    try {
      payload = JSON.parse(rawValue);
    } catch (e) {
      logger$1.error("JSON Parse Error", { error: e.message, rawValue });
      const error = new Error(`JSON Parse Error: ${e.message}`);
      error.type = "PARSE_ERROR";
      throw error;
    }
    const validationResult = kafkaPayloadSchema.safeParse(payload);
    if (!validationResult.success) {
      logger$1.error("Schema Validation Failed", {
        errors: validationResult.error.errors,
        payload
      });
      const error = new Error(`Schema Validation Failed: ${JSON.stringify(validationResult.error.errors)}`);
      error.type = "VALIDATION_ERROR";
      throw error;
    }
    rows = buildRowsFromPayload(payload);
  } catch (error) {
    throw error;
  }
  try {
    await dbManager2.insertRows({ schema: config2.db.schema, table: config2.db.table, rows });
  } catch (error) {
    error.type = "DB_ERROR";
    const sample = rows?.[0];
    error.dbContext = {
      rowsLength: rows?.length || 0,
      sampleRow: sample ? {
        guid: sample.guid,
        ts_ms: sample.ts_ms,
        mac: sample.mac,
        device_id: sample.device_id,
        room_id: sample.room_id,
        current_status: sample.current_status
      } : null
    };
    throw error;
  }
  return rows.length;
};
const createRedisClient = async (config2) => {
  const client = createClient({
    socket: {
      host: config2.host,
      port: config2.port
    },
    password: config2.password,
    database: config2.db
  });
  await client.connect();
  return client;
};
class RedisIntegration {
  constructor(client, projectName, apiBaseUrl) {
    this.client = client;
    this.projectName = projectName;
    this.apiBaseUrl = apiBaseUrl;
    this.heartbeatKey = "项目心跳";
    this.logKey = `${projectName}_项目控制台`;
  }
  async info(message, context) {
    const payload = {
      timestamp: (/* @__PURE__ */ new Date()).toISOString(),
      level: "info",
      message,
      metadata: context || void 0
    };
    await this.client.rPush(this.logKey, JSON.stringify(payload));
  }
  async error(message, context) {
    const payload = {
      timestamp: (/* @__PURE__ */ new Date()).toISOString(),
      level: "error",
      message,
      metadata: context || void 0
    };
    await this.client.rPush(this.logKey, JSON.stringify(payload));
  }
  startHeartbeat() {
    setInterval(() => {
      const payload = {
        projectName: this.projectName,
        apiBaseUrl: this.apiBaseUrl,
        lastActiveAt: Date.now()
      };
      this.client.rPush(this.heartbeatKey, JSON.stringify(payload));
    }, 3e3);
  }
}
const buildErrorQueueKey = (projectName) => `${projectName}_error_queue`;
const enqueueError = async (client, queueKey, payload) => {
  try {
    await client.rPush(queueKey, JSON.stringify(payload));
  } catch (error) {
    logger$1.error("Redis enqueue error failed", { error: error?.message });
    throw error;
  }
};
const startErrorRetryWorker = async ({
  client,
  queueKey,
  handler,
  redisIntegration,
  maxAttempts = 5
}) => {
  while (true) {
    const result = await client.blPop(queueKey, 0);
    const raw = result?.element;
    if (!raw) {
      continue;
    }
    let item;
    try {
      item = JSON.parse(raw);
    } catch (error) {
      logger$1.error("Invalid error payload", { error: error?.message });
      await redisIntegration.error("Invalid error payload", { module: "redis", stack: error?.message });
      continue;
    }
    const attempts = item.attempts || 0;
    try {
      await handler(item);
    } catch (error) {
      logger$1.error("Retry handler failed", { error: error?.message, stack: error?.stack });
      const nextPayload = {
        ...item,
        attempts: attempts + 1,
        lastError: error?.message,
        lastAttemptAt: Date.now()
      };
      if (nextPayload.attempts >= maxAttempts) {
        await redisIntegration.error("Retry attempts exceeded", { module: "retry", stack: JSON.stringify(nextPayload) });
      } else {
        await enqueueError(client, queueKey, nextPayload);
      }
    }
  }
};
class MetricCollector {
  constructor() {
    this.reset();
  }
  reset() {
    this.metrics = {
      kafka_pulled: 0,
      parse_error: 0,
      db_inserted: 0,
      db_failed: 0
    };
  }
  increment(metric, count = 1) {
    if (this.metrics.hasOwnProperty(metric)) {
      this.metrics[metric] += count;
    }
  }
  getAndReset() {
    const current = { ...this.metrics };
    this.reset();
    return current;
  }
}
const bootstrap = async () => {
  logger$1.info("Starting application with config", {
    env: process.env.NODE_ENV,
    db: {
      host: config.db.host,
      port: config.db.port,
      user: config.db.user,
      database: config.db.database,
      schema: config.db.schema
    },
    kafka: {
      brokers: config.kafka.brokers,
      topic: config.kafka.topic,
      groupId: config.kafka.groupId
    },
    redis: {
      host: config.redis.host,
      port: config.redis.port
    }
  });
  await dbInitializer.initialize();
  const metricCollector = new MetricCollector();
  cron.schedule("0 0 * * *", async () => {
    logger$1.info("Running scheduled partition maintenance...");
    try {
      await partitionManager.ensurePartitions(30);
    } catch (err) {
      logger$1.error("Scheduled partition maintenance failed", err);
    }
  });
  const redisClient = await createRedisClient(config.redis);
  const redisIntegration = new RedisIntegration(
    redisClient,
    config.redis.projectName,
    config.redis.apiBaseUrl
  );
  redisIntegration.startHeartbeat();
  cron.schedule("* * * * *", async () => {
    const metrics = metricCollector.getAndReset();
    const report = `[Minute Metrics] Pulled: ${metrics.kafka_pulled}, Parse Error: ${metrics.parse_error}, Inserted: ${metrics.db_inserted}, Failed: ${metrics.db_failed}`;
    console.log(report);
    logger$1.info(report, metrics);
    try {
      await redisIntegration.info("Minute Metrics", metrics);
    } catch (err) {
      logger$1.error("Failed to report metrics to Redis", { error: err?.message });
    }
  });
  const errorQueueKey = buildErrorQueueKey(config.redis.projectName);
  const handleError = async (error, message) => {
    logger$1.error("Kafka processing error", {
      error: error?.message,
      type: error?.type,
      stack: error?.stack
    });
    try {
      await redisIntegration.error("Kafka processing error", {
        module: "kafka",
        stack: error?.stack || error?.message
      });
    } catch (redisError) {
      logger$1.error("Redis error log failed", { error: redisError?.message });
    }
    if (message) {
      const messageValue = Buffer.isBuffer(message.value) ? message.value.toString("utf8") : message.value;
      try {
        await enqueueError(redisClient, errorQueueKey, {
          attempts: 0,
          value: messageValue,
          meta: {
            topic: message.topic,
            partition: message.partition,
            offset: message.offset,
            key: message.key
          },
          timestamp: Date.now()
        });
      } catch (enqueueError2) {
        logger$1.error("Enqueue error payload failed", { error: enqueueError2?.message });
      }
    }
  };
  const handleMessage = async (message) => {
    if (message.topic) {
      metricCollector.increment("kafka_pulled");
    }
    const messageValue = Buffer.isBuffer(message.value) ? message.value.toString("utf8") : message.value;
    Buffer.isBuffer(message.key) ? message.key.toString("utf8") : message.key;
    ({
      topic: message.topic,
      partition: message.partition,
      offset: message.offset,
      valueLength: !config.kafka.logMessages && typeof messageValue === "string" ? messageValue.length : null
    });
    while (true) {
      try {
        const inserted = await processKafkaMessage({ message, dbManager, config });
        metricCollector.increment("db_inserted");
        return;
      } catch (error) {
        const isDbConnectionError = error.code && ["ECONNREFUSED", "57P03", "08006", "08001", "EADDRINUSE", "ETIMEDOUT"].includes(error.code) || error.message && (error.message.includes("ECONNREFUSED") || error.message.includes("connection") || error.message.includes("terminated") || error.message.includes("EADDRINUSE") || error.message.includes("ETIMEDOUT") || error.message.includes("The server does not support SSL connections"));
        if (isDbConnectionError) {
          logger$1.error("Database offline. Pausing consumption for 1 minute...", { error: error.message });
          await new Promise((resolve) => setTimeout(resolve, 6e4));
          while (true) {
            const isConnected = await dbManager.checkConnection();
            if (isConnected) {
              logger$1.info("Database connection restored. Resuming processing...");
              break;
            }
            logger$1.warn("Database still offline. Waiting 1 minute...");
            await new Promise((resolve) => setTimeout(resolve, 6e4));
          }
        } else {
          if (error.type === "PARSE_ERROR") {
            metricCollector.increment("parse_error");
          } else {
            metricCollector.increment("db_failed");
          }
          logger$1.error("Message processing failed (Data/Logic Error), skipping message", {
            error: error?.message,
            type: error?.type
          });
          await handleError(error, message);
          return;
        }
      }
    }
  };
  const consumers = createKafkaConsumers({
    kafkaConfig: config.kafka,
    onMessage: handleMessage,
    onError: handleError
  });
  startErrorRetryWorker({
    client: redisClient,
    queueKey: errorQueueKey,
    redisIntegration,
    handler: async (item) => {
      if (!item?.value) {
        throw new Error("Missing value in retry payload");
      }
      await handleMessage({ value: item.value });
    }
  }).catch((err) => {
    logger$1.error("Retry worker failed", { error: err?.message });
  });
  const shutdown = async (signal) => {
    logger$1.info(`Received ${signal}, shutting down...`);
    try {
      if (consumers && consumers.length > 0) {
        await Promise.all(consumers.map((c) => new Promise((resolve) => c.close(true, resolve))));
        logger$1.info("Kafka consumer closed", { count: consumers.length });
      }
      await redisClient.quit();
      logger$1.info("Redis client closed");
      await dbManager.close();
      logger$1.info("Database connection closed");
      process.exit(0);
    } catch (err) {
      logger$1.error("Error during shutdown", { error: err?.message });
      process.exit(1);
    }
  };
  process.on("SIGTERM", () => shutdown("SIGTERM"));
  process.on("SIGINT", () => shutdown("SIGINT"));
};
bootstrap().catch((error) => {
  logger$1.error("Service bootstrap failed", { error: error?.message });
  process.exit(1);
});
bls-onoffline-backend/ecosystem.config.cjs (new file, 22 lines)
@@ -0,0 +1,22 @@
module.exports = {
  apps: [{
    name: 'bls-onoffline',
    script: 'dist/index.js',
    instances: 1,
    exec_mode: 'fork',
    autorestart: true,
    watch: false,
    max_memory_restart: '1G',
    env_file: '.env',
    env: {
      NODE_ENV: 'production',
      PORT: 3001
    },
    error_file: './logs/error.log',
    out_file: './logs/out.log',
    log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
    merge_logs: true,
    kill_timeout: 5000,
    time: true
  }]
};
bls-onoffline-backend/openspec/AGENTS.md (new file, 456 lines)
@@ -0,0 +1,456 @@
# OpenSpec Instructions

Instructions for AI coding assistants using OpenSpec for spec-driven development.

## TL;DR Quick Checklist

- Search existing work: `openspec spec list --long`, `openspec list` (use `rg` only for full-text search)
- Decide scope: new capability vs modify existing capability
- Pick a unique `change-id`: kebab-case, verb-led (`add-`, `update-`, `remove-`, `refactor-`)
- Scaffold: `proposal.md`, `tasks.md`, `design.md` (only if needed), and delta specs per affected capability
- Write deltas: use `## ADDED|MODIFIED|REMOVED|RENAMED Requirements`; include at least one `#### Scenario:` per requirement
- Validate: `openspec validate [change-id] --strict` and fix issues
- Request approval: Do not start implementation until proposal is approved

## Three-Stage Workflow

### Stage 1: Creating Changes
Create proposal when you need to:
- Add features or functionality
- Make breaking changes (API, schema)
- Change architecture or patterns
- Optimize performance (changes behavior)
- Update security patterns

Triggers (examples):
- "Help me create a change proposal"
- "Help me plan a change"
- "Help me create a proposal"
- "I want to create a spec proposal"
- "I want to create a spec"

Loose matching guidance:
- Contains one of: `proposal`, `change`, `spec`
- With one of: `create`, `plan`, `make`, `start`, `help`

Skip proposal for:
- Bug fixes (restore intended behavior)
- Typos, formatting, comments
- Dependency updates (non-breaking)
- Configuration changes
- Tests for existing behavior

**Workflow**
1. Review `openspec/project.md`, `openspec list`, and `openspec list --specs` to understand current context.
2. Choose a unique verb-led `change-id` and scaffold `proposal.md`, `tasks.md`, optional `design.md`, and spec deltas under `openspec/changes/<id>/`.
3. Draft spec deltas using `## ADDED|MODIFIED|REMOVED Requirements` with at least one `#### Scenario:` per requirement.
4. Run `openspec validate <id> --strict` and resolve any issues before sharing the proposal.

### Stage 2: Implementing Changes
Track these steps as TODOs and complete them one by one.
1. **Read proposal.md** - Understand what's being built
2. **Read design.md** (if exists) - Review technical decisions
3. **Read tasks.md** - Get implementation checklist
4. **Implement tasks sequentially** - Complete in order
5. **Confirm completion** - Ensure every item in `tasks.md` is finished before updating statuses
6. **Update checklist** - After all work is done, set every task to `- [x]` so the list reflects reality
7. **Approval gate** - Do not start implementation until the proposal is reviewed and approved

### Stage 3: Archiving Changes
After deployment, create separate PR to:
- Move `changes/[name]/` → `changes/archive/YYYY-MM-DD-[name]/`
- Update `specs/` if capabilities changed
- Use `openspec archive <change-id> --skip-specs --yes` for tooling-only changes (always pass the change ID explicitly)
- Run `openspec validate --strict` to confirm the archived change passes checks

## Before Any Task

**Context Checklist:**
- [ ] Read relevant specs in `specs/[capability]/spec.md`
- [ ] Check pending changes in `changes/` for conflicts
- [ ] Read `openspec/project.md` for conventions
- [ ] Run `openspec list` to see active changes
- [ ] Run `openspec list --specs` to see existing capabilities

**Before Creating Specs:**
- Always check if capability already exists
- Prefer modifying existing specs over creating duplicates
- Use `openspec show [spec]` to review current state
- If request is ambiguous, ask 1–2 clarifying questions before scaffolding

### Search Guidance
- Enumerate specs: `openspec spec list --long` (or `--json` for scripts)
- Enumerate changes: `openspec list` (or `openspec change list --json` - deprecated but available)
- Show details:
  - Spec: `openspec show <spec-id> --type spec` (use `--json` for filters)
  - Change: `openspec show <change-id> --json --deltas-only`
- Full-text search (use ripgrep): `rg -n "Requirement:|Scenario:" openspec/specs`

## Quick Start

### CLI Commands

```bash
# Essential commands
openspec list                            # List active changes
openspec list --specs                    # List specifications
openspec show [item]                     # Display change or spec
openspec validate [item]                 # Validate changes or specs
openspec archive <change-id> [--yes|-y]  # Archive after deployment (add --yes for non-interactive runs)

# Project management
openspec init [path]                     # Initialize OpenSpec
openspec update [path]                   # Update instruction files

# Interactive mode
openspec show                            # Prompts for selection
openspec validate                        # Bulk validation mode

# Debugging
openspec show [change] --json --deltas-only
openspec validate [change] --strict
```
### Command Flags

- `--json` - Machine-readable output
- `--type change|spec` - Disambiguate items
- `--strict` - Comprehensive validation
- `--no-interactive` - Disable prompts
- `--skip-specs` - Archive without spec updates
- `--yes`/`-y` - Skip confirmation prompts (non-interactive archive)

## Directory Structure

```
openspec/
├── project.md              # Project conventions
├── specs/                  # Current truth - what IS built
│   └── [capability]/       # Single focused capability
│       ├── spec.md         # Requirements and scenarios
│       └── design.md       # Technical patterns
├── changes/                # Proposals - what SHOULD change
│   ├── [change-name]/
│   │   ├── proposal.md     # Why, what, impact
│   │   ├── tasks.md        # Implementation checklist
│   │   ├── design.md       # Technical decisions (optional; see criteria)
│   │   └── specs/          # Delta changes
│   │       └── [capability]/
│   │           └── spec.md # ADDED/MODIFIED/REMOVED
│   └── archive/            # Completed changes
```

## Creating Change Proposals

### Decision Tree

```
New request?
├─ Bug fix restoring spec behavior? → Fix directly
├─ Typo/format/comment? → Fix directly
├─ New feature/capability? → Create proposal
├─ Breaking change? → Create proposal
├─ Architecture change? → Create proposal
└─ Unclear? → Create proposal (safer)
```

### Proposal Structure

1. **Create directory:** `changes/[change-id]/` (kebab-case, verb-led, unique)

2. **Write proposal.md:**
```markdown
# Change: [Brief description of change]

## Why
[1-2 sentences on problem/opportunity]

## What Changes
- [Bullet list of changes]
- [Mark breaking changes with **BREAKING**]

## Impact
- Affected specs: [list capabilities]
- Affected code: [key files/systems]
```

3. **Create spec deltas:** `specs/[capability]/spec.md`
```markdown
## ADDED Requirements
### Requirement: New Feature
The system SHALL provide...

#### Scenario: Success case
- **WHEN** user performs action
- **THEN** expected result

## MODIFIED Requirements
### Requirement: Existing Feature
[Complete modified requirement]

## REMOVED Requirements
### Requirement: Old Feature
**Reason**: [Why removing]
**Migration**: [How to handle]
```
If multiple capabilities are affected, create multiple delta files under `changes/[change-id]/specs/<capability>/spec.md`—one per capability.

4. **Create tasks.md:**
```markdown
## 1. Implementation
- [ ] 1.1 Create database schema
- [ ] 1.2 Implement API endpoint
- [ ] 1.3 Add frontend component
- [ ] 1.4 Write tests
```

5. **Create design.md when needed:**
Create `design.md` if any of the following apply; otherwise omit it:
- Cross-cutting change (multiple services/modules) or a new architectural pattern
- New external dependency or significant data model changes
- Security, performance, or migration complexity
- Ambiguity that benefits from technical decisions before coding

Minimal `design.md` skeleton:
```markdown
## Context
[Background, constraints, stakeholders]

## Goals / Non-Goals
- Goals: [...]
- Non-Goals: [...]

## Decisions
- Decision: [What and why]
- Alternatives considered: [Options + rationale]

## Risks / Trade-offs
- [Risk] → Mitigation

## Migration Plan
[Steps, rollback]

## Open Questions
- [...]
```
## Spec File Format
|
||||||
|
|
||||||
|
### Critical: Scenario Formatting
|
||||||
|
|
||||||
|
**CORRECT** (use #### headers):
|
||||||
|
```markdown
|
||||||
|
#### Scenario: User login success
|
||||||
|
- **WHEN** valid credentials provided
|
||||||
|
- **THEN** return JWT token
|
||||||
|
```
|
||||||
|
|
||||||
|
**WRONG** (don't use bullets or bold):
|
||||||
|
```markdown
|
||||||
|
- **Scenario: User login** ❌
|
||||||
|
**Scenario**: User login ❌
|
||||||
|
### Scenario: User login ❌
|
||||||
|
```
|
||||||
|
|
||||||
|
Every requirement MUST have at least one scenario.
|
||||||
|
|
||||||
|
### Requirement Wording
|
||||||
|
- Use SHALL/MUST for normative requirements (avoid should/may unless intentionally non-normative)
|
||||||
|
|
||||||
|
### Delta Operations
|
||||||
|
|
||||||
|
- `## ADDED Requirements` - New capabilities
|
||||||
|
- `## MODIFIED Requirements` - Changed behavior
|
||||||
|
- `## REMOVED Requirements` - Deprecated features
|
||||||
|
- `## RENAMED Requirements` - Name changes
|
||||||
|
|
||||||
|
Headers matched with `trim(header)` - whitespace ignored.
|
||||||
|
|
||||||
|
#### When to use ADDED vs MODIFIED
|
||||||
|
- ADDED: Introduces a new capability or sub-capability that can stand alone as a requirement. Prefer ADDED when the change is orthogonal (e.g., adding "Slash Command Configuration") rather than altering the semantics of an existing requirement.
|
||||||
|
- MODIFIED: Changes the behavior, scope, or acceptance criteria of an existing requirement. Always paste the full, updated requirement content (header + all scenarios). The archiver will replace the entire requirement with what you provide here; partial deltas will drop previous details.
|
||||||
|
- RENAMED: Use when only the name changes. If you also change behavior, use RENAMED (name) plus MODIFIED (content) referencing the new name.
|
||||||
|
|
||||||
|
Common pitfall: Using MODIFIED to add a new concern without including the previous text. This causes loss of detail at archive time. If you aren’t explicitly changing the existing requirement, add a new requirement under ADDED instead.
|
||||||
|
|
||||||
|
Authoring a MODIFIED requirement correctly:
|
||||||
|
1) Locate the existing requirement in `openspec/specs/<capability>/spec.md`.
|
||||||
|
2) Copy the entire requirement block (from `### Requirement: ...` through its scenarios).
|
||||||
|
3) Paste it under `## MODIFIED Requirements` and edit to reflect the new behavior.
|
||||||
|
4) Ensure the header text matches exactly (whitespace-insensitive) and keep at least one `#### Scenario:`.
|
||||||
|
|
||||||
|
Example for RENAMED:
|
||||||
|
```markdown
|
||||||
|
## RENAMED Requirements
|
||||||
|
- FROM: `### Requirement: Login`
|
||||||
|
- TO: `### Requirement: User Authentication`
|
||||||
|
```
|
||||||
|
|
||||||
|
## Troubleshooting
|
||||||
|
|
||||||
|
### Common Errors
|
||||||
|
|
||||||
|
**"Change must have at least one delta"**
|
||||||
|
- Check `changes/[name]/specs/` exists with .md files
|
||||||
|
- Verify files have operation prefixes (## ADDED Requirements)
|
||||||
|
|
||||||
|
**"Requirement must have at least one scenario"**
|
||||||
|
- Check scenarios use `#### Scenario:` format (4 hashtags)
|
||||||
|
- Don't use bullet points or bold for scenario headers
|
||||||
|
|
||||||
|
**Silent scenario parsing failures**
|
||||||
|
- Exact format required: `#### Scenario: Name`
|
||||||
|
- Debug with: `openspec show [change] --json --deltas-only`
|
||||||
|
|
||||||
|
### Validation Tips
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Always use strict mode for comprehensive checks
|
||||||
|
openspec validate [change] --strict
|
||||||
|
|
||||||
|
# Debug delta parsing
|
||||||
|
openspec show [change] --json | jq '.deltas'
|
||||||
|
|
||||||
|
# Check specific requirement
|
||||||
|
openspec show [spec] --json -r 1
|
||||||
|
```
|
||||||
|
|
||||||
|
## Happy Path Script
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# 1) Explore current state
|
||||||
|
openspec spec list --long
|
||||||
|
openspec list
|
||||||
|
# Optional full-text search:
|
||||||
|
# rg -n "Requirement:|Scenario:" openspec/specs
|
||||||
|
# rg -n "^#|Requirement:" openspec/changes
|
||||||
|
|
||||||
|
# 2) Choose change id and scaffold
|
||||||
|
CHANGE=add-two-factor-auth
|
||||||
|
mkdir -p openspec/changes/$CHANGE/specs/auth
|
||||||
|
printf "## Why\n...\n\n## What Changes\n- ...\n\n## Impact\n- ...\n" > openspec/changes/$CHANGE/proposal.md
|
||||||
|
printf "## 1. Implementation\n- [ ] 1.1 ...\n" > openspec/changes/$CHANGE/tasks.md
|
||||||
|
|
||||||
|
# 3) Add deltas (example)
|
||||||
|
cat > openspec/changes/$CHANGE/specs/auth/spec.md << 'EOF'
|
||||||
|
## ADDED Requirements
|
||||||
|
### Requirement: Two-Factor Authentication
|
||||||
|
Users MUST provide a second factor during login.
|
||||||
|
|
||||||
|
#### Scenario: OTP required
|
||||||
|
- **WHEN** valid credentials are provided
|
||||||
|
- **THEN** an OTP challenge is required
|
||||||
|
EOF
|
||||||
|
|
||||||
|
# 4) Validate
|
||||||
|
openspec validate $CHANGE --strict
|
||||||
|
```
|
||||||
|
|
||||||
|
## Multi-Capability Example
|
||||||
|
|
||||||
|
```
|
||||||
|
openspec/changes/add-2fa-notify/
|
||||||
|
├── proposal.md
|
||||||
|
├── tasks.md
|
||||||
|
└── specs/
|
||||||
|
├── auth/
|
||||||
|
│ └── spec.md # ADDED: Two-Factor Authentication
|
||||||
|
└── notifications/
|
||||||
|
└── spec.md # ADDED: OTP email notification
|
||||||
|
```
|
||||||
|
|
||||||
|
auth/spec.md
|
||||||
|
```markdown
|
||||||
|
## ADDED Requirements
|
||||||
|
### Requirement: Two-Factor Authentication
|
||||||
|
...
|
||||||
|
```
|
||||||
|
|
||||||
|
notifications/spec.md
|
||||||
|
```markdown
|
||||||
|
## ADDED Requirements
|
||||||
|
### Requirement: OTP Email Notification
|
||||||
|
...
|
||||||
|
```
|
||||||
|
|
||||||
|
## Best Practices
|
||||||
|
|
||||||
|
### Simplicity First
|
||||||
|
- Default to <100 lines of new code
|
||||||
|
- Single-file implementations until proven insufficient
|
||||||
|
- Avoid frameworks without clear justification
|
||||||
|
- Choose boring, proven patterns
|
||||||
|
|
||||||
|
### Complexity Triggers
|
||||||
|
Only add complexity with:
|
||||||
|
- Performance data showing current solution too slow
|
||||||
|
- Concrete scale requirements (>1000 users, >100MB data)
|
||||||
|
- Multiple proven use cases requiring abstraction
|
||||||
|
|
||||||
|
### Clear References
|
||||||
|
- Use `file.ts:42` format for code locations
|
||||||
|
- Reference specs as `specs/auth/spec.md`
|
||||||
|
- Link related changes and PRs
|
||||||
|
|
||||||
|
### Capability Naming
|
||||||
|
- Use verb-noun: `user-auth`, `payment-capture`
|
||||||
|
- Single purpose per capability
|
||||||
|
- 10-minute understandability rule
|
||||||
|
- Split if description needs "AND"
|
||||||
|
|
||||||
|
### Change ID Naming
|
||||||
|
- Use kebab-case, short and descriptive: `add-two-factor-auth`
|
||||||
|
- Prefer verb-led prefixes: `add-`, `update-`, `remove-`, `refactor-`
|
||||||
|
- Ensure uniqueness; if taken, append `-2`, `-3`, etc.
|
||||||
|
|
||||||
|
## Tool Selection Guide
|
||||||
|
|
||||||
|
| Task | Tool | Why |
|
||||||
|
|------|------|-----|
|
||||||
|
| Find files by pattern | Glob | Fast pattern matching |
|
||||||
|
| Search code content | Grep | Optimized regex search |
|
||||||
|
| Read specific files | Read | Direct file access |
|
||||||
|
| Explore unknown scope | Task | Multi-step investigation |
|
||||||
|
|
||||||
|
## Error Recovery
|
||||||
|
|
||||||
|
### Change Conflicts
|
||||||
|
1. Run `openspec list` to see active changes
|
||||||
|
2. Check for overlapping specs
|
||||||
|
3. Coordinate with change owners
|
||||||
|
4. Consider combining proposals
|
||||||
|
|
||||||
|
### Validation Failures
|
||||||
|
1. Run with `--strict` flag
|
||||||
|
2. Check JSON output for details
|
||||||
|
3. Verify spec file format
|
||||||
|
4. Ensure scenarios properly formatted
|
||||||
|
|
||||||
|
### Missing Context
|
||||||
|
1. Read project.md first
|
||||||
|
2. Check related specs
|
||||||
|
3. Review recent archives
|
||||||
|
4. Ask for clarification
|
||||||
|
|
||||||
|
## Quick Reference
|
||||||
|
|
||||||
|
### Stage Indicators
|
||||||
|
- `changes/` - Proposed, not yet built
|
||||||
|
- `specs/` - Built and deployed
|
||||||
|
- `archive/` - Completed changes
|
||||||
|
|
||||||
|
### File Purposes
|
||||||
|
- `proposal.md` - Why and what
|
||||||
|
- `tasks.md` - Implementation steps
|
||||||
|
- `design.md` - Technical decisions
|
||||||
|
- `spec.md` - Requirements and behavior
|
||||||
|
|
||||||
|
### CLI Essentials
|
||||||
|
```bash
|
||||||
|
openspec list # What's in progress?
|
||||||
|
openspec show [item] # View details
|
||||||
|
openspec validate --strict # Is it correct?
|
||||||
|
openspec archive <change-id> [--yes|-y] # Mark complete (add --yes for automation)
|
||||||
|
```
|
||||||
|
|
||||||
|
Remember: Specs are truth. Changes are proposals. Keep them in sync.
|
||||||
@@ -0,0 +1,17 @@
|
|||||||
|
# Change: Fix Kafka Partitioning and Schema Issues
|
||||||
|
|
||||||
|
## Why
|
||||||
|
Production deployment revealed three data ingestion issues:
1. The Kafka topic name changed and now carries a `-0` suffix.
2. Legacy data contains second-level timestamps; interpreted as milliseconds they resolve to 1970 and fail partition lookup in PostgreSQL (the schema stores milliseconds).
3. Variable-length fields (reboot reason, status) exceeded the VARCHAR(10) column limits and crashed ingestion.
|
||||||
|
|
||||||
|
## What Changes
|
||||||
|
- **Modified Requirement**: Update Kafka Topic to `blwlog4Nodejs-rcu-onoffline-topic-0`.
|
||||||
|
- **New Requirement**: Implement heuristic timestamp conversion (seconds -> milliseconds) for values below 100,000,000,000.
|
||||||
|
- **New Requirement**: Truncate specific fields to VARCHAR(255) to prevent DB rejection.
|
||||||
|
- **Modified Requirement**: Update DB Schema to VARCHAR(255) for robustness.
|
||||||
|
|
||||||
|
## Impact
|
||||||
|
- Affected specs: `onoffline`
|
||||||
|
- Affected code: `src/processor/index.js`, `scripts/init_db.sql`
|
||||||
@@ -0,0 +1,25 @@
|
|||||||
|
## MODIFIED Requirements

### Requirement: Consume and Persist
The system SHALL consume messages from blwlog4Nodejs-rcu-onoffline-topic-0 and write them to log_platform.onoffline.onoffline_record.

#### Scenario: Non-reboot record written
- **GIVEN** RebootReason is empty or absent
- **WHEN** the message is processed
- **THEN** current_status equals CurrentStatus (truncated to 255 characters)

## ADDED Requirements

### Requirement: Field Length Limits and Truncation
The system SHALL truncate variable-length fields to the maximum length the database allows (VARCHAR(255)) so that writes cannot fail on length.

#### Scenario: Over-long field handling
- **GIVEN** LauncherVersion, CurrentStatus, or RebootReason exceeds 255 characters
- **WHEN** the message is processed
- **THEN** the field is truncated to its first 255 characters before insertion

### Requirement: Automatic Timestamp Unit Detection
The system SHALL detect whether the UnixTime field is in seconds or milliseconds and normalize it to milliseconds.

#### Scenario: Second-level timestamp conversion
- **GIVEN** UnixTime < 100000000000 (i.e., before roughly 1973 when read as milliseconds)
- **WHEN** the timestamp is parsed
- **THEN** it is multiplied by 1000 and stored as milliseconds
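A minimal sketch of the two added rules above, using illustrative constant names rather than the project's code (the authoritative implementation is src/processor/index.js later in this commit):

```js
// Illustrative only: truncation to the VARCHAR(255) limit and the seconds-to-ms heuristic.
const MAX_VARCHAR = 255;
const MS_THRESHOLD = 100000000000; // values below this are treated as seconds

const truncate = (value) =>
  value === undefined || value === null ? null : String(value).slice(0, MAX_VARCHAR);

const toMillis = (unixTime) =>
  typeof unixTime === 'number' && unixTime < MS_THRESHOLD ? unixTime * 1000 : unixTime;

console.log(toMillis(1770000235));             // 1770000235000 (legacy second-level value)
console.log(toMillis(1770000235000));          // unchanged, already milliseconds
console.log(truncate('x'.repeat(300)).length); // 255
```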
@@ -0,0 +1,6 @@
|
|||||||
|
## 1. Implementation
|
||||||
|
- [x] Update Kafka Topic in .env and config
|
||||||
|
- [x] Implement timestamp unit detection and conversion in processor
|
||||||
|
- [x] Implement field truncation logic in processor
|
||||||
|
- [x] Update database schema definition (init_db.sql) to VARCHAR(255)
|
||||||
|
- [x] Verify data ingestion with production stream
|
||||||
31
bls-onoffline-backend/openspec/project.md
Normal file
@@ -0,0 +1,31 @@
|
|||||||
|
# Project Context
|
||||||
|
|
||||||
|
## Purpose
|
||||||
|
[Describe your project's purpose and goals]
|
||||||
|
|
||||||
|
## Tech Stack
|
||||||
|
- [List your primary technologies]
|
||||||
|
- [e.g., TypeScript, React, Node.js]
|
||||||
|
|
||||||
|
## Project Conventions
|
||||||
|
|
||||||
|
### Code Style
|
||||||
|
[Describe your code style preferences, formatting rules, and naming conventions]
|
||||||
|
|
||||||
|
### Architecture Patterns
|
||||||
|
[Document your architectural decisions and patterns]
|
||||||
|
|
||||||
|
### Testing Strategy
|
||||||
|
[Explain your testing approach and requirements]
|
||||||
|
|
||||||
|
### Git Workflow
|
||||||
|
[Describe your branching strategy and commit conventions]
|
||||||
|
|
||||||
|
## Domain Context
|
||||||
|
[Add domain-specific knowledge that AI assistants need to understand]
|
||||||
|
|
||||||
|
## Important Constraints
|
||||||
|
[List any technical, business, or regulatory constraints]
|
||||||
|
|
||||||
|
## External Dependencies
|
||||||
|
[Document key external services, APIs, or systems]
|
||||||
85
bls-onoffline-backend/openspec/specs/onoffline/spec.md
Normal file
@@ -0,0 +1,85 @@
|
|||||||
|
# Spec: onoffline-backend

## Purpose
Consume device on/offline events from Kafka and write them to a partitioned PostgreSQL table according to the processing rules, with high reliability, idempotent writes, and error recovery.

## Requirements
### Requirement: Consume and Persist
The system SHALL consume messages from blwlog4Nodejs-rcu-onoffline-topic-0 and write them to log_platform.onoffline.onoffline_record.

#### Scenario: Non-reboot record written
- **GIVEN** RebootReason is empty or absent
- **WHEN** the message is processed
- **THEN** current_status equals CurrentStatus (truncated to 255 characters)

### Requirement: Reboot Record Handling
The system SHALL force current_status to on when RebootReason is non-empty.

#### Scenario: Reboot record written
- **GIVEN** RebootReason is a non-empty value
- **WHEN** the message is processed
- **THEN** current_status equals on

### Requirement: Empty Value Preservation
The system SHALL preserve upstream empty values and never pad fields with 0.

#### Scenario: Empty value written
- **GIVEN** LauncherVersion or RebootReason is an empty string
- **WHEN** the message is processed
- **THEN** the stored database value is the corresponding empty string

### Requirement: Database Partitioning Strategy
The system SHALL use daily range partitioning and automatically maintain partitions for the next 30 days.

#### Scenario: Partition pre-creation
- **GIVEN** system startup or the daily early-morning run
- **WHEN** the partition maintenance task runs
- **THEN** partitions for the next 30 days exist in the database

### Requirement: Consumption Reliability (At-Least-Once)
The system SHALL commit Kafka consumer offsets only after the data has been written to the database successfully.

#### Scenario: Per-message acknowledgement with in-order commits
- **GIVEN** multiple messages processed concurrently (offsets 1, 2, 3)
- **WHEN** offset 2 finishes first while offset 1 is still in flight
- **THEN** the system does not commit offset 2; only after offset 1 also finishes does it commit up to offset 3 (i.e., once 1, 2, and 3 are all done)

### Requirement: Database Offline Protection
The system SHALL pause consumption when the database connection is lost, to prevent message pile-up or data loss.

#### Scenario: Database disconnect
- **GIVEN** the database connection fails (ECONNREFUSED, etc.)
- **WHEN** the consumer attempts to write
- **THEN** Kafka consumption is paused for 1 minute and the system enters a polling mode until the database recovers

### Requirement: Idempotent Writes
The system SHALL tolerate re-consumed data and avoid primary-key conflicts.

#### Scenario: Duplicate record handling
- **GIVEN** Kafka redelivers a message that has already been processed
- **WHEN** the write is attempted
- **THEN** the conflict is ignored via `ON CONFLICT DO NOTHING` and the message is treated as successfully processed

### Requirement: Performance and Logging
The system SHALL minimize log output during normal operation.

#### Scenario: Normal-operation logging
- **GIVEN** data is processed normally
- **WHEN** writes succeed
- **THEN** no per-message log line is emitted; only an aggregated per-minute summary (Pulled/Inserted) is logged

### Requirement: Field Length Limits and Truncation
The system SHALL truncate variable-length fields to the maximum length the database allows (VARCHAR(255)) so that writes cannot fail on length.

#### Scenario: Over-long field handling
- **GIVEN** LauncherVersion, CurrentStatus, or RebootReason exceeds 255 characters
- **WHEN** the message is processed
- **THEN** the field is truncated to its first 255 characters before insertion

### Requirement: Automatic Timestamp Unit Detection
The system SHALL detect whether the UnixTime field is in seconds or milliseconds and normalize it to milliseconds.

#### Scenario: Second-level timestamp conversion
- **GIVEN** UnixTime < 100000000000 (i.e., before roughly 1973 when read as milliseconds)
- **WHEN** the timestamp is parsed
- **THEN** it is multiplied by 1000 and stored as milliseconds
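The offline-protection requirement above boils down to a pause-and-poll loop around the write path. A hedged sketch follows, with the data-error branch omitted for brevity; the full version ships in src/index.js later in this commit:

```js
// Sketch: retry the same message until the database is reachable again (at-least-once).
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

export async function writeWithOfflineProtection(writeFn, checkConnection) {
  while (true) {
    try {
      return await writeFn(); // success: the caller may now commit the Kafka offset
    } catch (err) {
      console.error('DB write failed, pausing consumption for 1 minute:', err.message);
      await sleep(60_000);
      while (!(await checkConnection())) { // poll every minute until the database answers
        await sleep(60_000);
      }
      // fall through and retry the same message
    }
  }
}
```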
11
bls-onoffline-backend/openspec/specs/onoffline/status.md
Normal file
@@ -0,0 +1,11 @@
|
|||||||
|
|
||||||
|
## Implementation Status
- **Date**: 2026-02-04
- **Status**: Completed
- **Notes**:
  - Core consumption logic, partition management, and idempotent database writes are complete.
  - The database connection leak (EADDRINUSE) issue was handled and an offline-protection mechanism was added.
  - The timestamp unit issue (seconds -> ms) was fixed.
  - Key field lengths were widened to VARCHAR(255), with truncation protection added in code.
  - Backlog (catch-up) consumption capacity was verified.
  - Development tasks for this phase have been archived.
|
||||||
3526
bls-onoffline-backend/package-lock.json
generated
Normal file
File diff suppressed because it is too large
27
bls-onoffline-backend/package.json
Normal file
@@ -0,0 +1,27 @@
|
|||||||
|
{
|
||||||
|
"name": "bls-onoffline-backend",
|
||||||
|
"version": "1.0.0",
|
||||||
|
"type": "module",
|
||||||
|
"private": true,
|
||||||
|
"scripts": {
|
||||||
|
"dev": "node src/index.js",
|
||||||
|
"build": "vite build --ssr src/index.js --outDir dist",
|
||||||
|
"test": "vitest run",
|
||||||
|
"lint": "node scripts/lint.js",
|
||||||
|
"spec:lint": "openspec validate --specs --strict --no-interactive",
|
||||||
|
"spec:validate": "openspec validate --specs --no-interactive",
|
||||||
|
"start": "node dist/index.js"
|
||||||
|
},
|
||||||
|
"dependencies": {
|
||||||
|
"dotenv": "^16.4.5",
|
||||||
|
"kafka-node": "^5.0.0",
|
||||||
|
"node-cron": "^4.2.1",
|
||||||
|
"pg": "^8.11.5",
|
||||||
|
"redis": "^4.6.13",
|
||||||
|
"zod": "^4.3.6"
|
||||||
|
},
|
||||||
|
"devDependencies": {
|
||||||
|
"vite": "^5.4.0",
|
||||||
|
"vitest": "^4.0.18"
|
||||||
|
}
|
||||||
|
}
|
||||||
23
bls-onoffline-backend/scripts/init_db.sql
Normal file
@@ -0,0 +1,23 @@
|
|||||||
|
CREATE SCHEMA IF NOT EXISTS onoffline;
|
||||||
|
|
||||||
|
CREATE TABLE IF NOT EXISTS onoffline.onoffline_record (
|
||||||
|
guid VARCHAR(32) NOT NULL,
|
||||||
|
ts_ms BIGINT NOT NULL,
|
||||||
|
write_ts_ms BIGINT NOT NULL,
|
||||||
|
hotel_id SMALLINT NOT NULL,
|
||||||
|
mac VARCHAR(21) NOT NULL,
|
||||||
|
device_id VARCHAR(64) NOT NULL,
|
||||||
|
room_id VARCHAR(64) NOT NULL,
|
||||||
|
ip VARCHAR(21),
|
||||||
|
current_status VARCHAR(255),
|
||||||
|
launcher_version VARCHAR(255),
|
||||||
|
reboot_reason VARCHAR(255),
|
||||||
|
PRIMARY KEY (ts_ms, mac, device_id, room_id)
|
||||||
|
) PARTITION BY RANGE (ts_ms);
|
||||||
|
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_onoffline_ts_ms ON onoffline.onoffline_record (ts_ms);
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_onoffline_hotel_id ON onoffline.onoffline_record (hotel_id);
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_onoffline_mac ON onoffline.onoffline_record (mac);
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_onoffline_device_id ON onoffline.onoffline_record (device_id);
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_onoffline_room_id ON onoffline.onoffline_record (room_id);
|
||||||
|
CREATE INDEX IF NOT EXISTS idx_onoffline_current_status ON onoffline.onoffline_record (current_status);
|
||||||
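For reference, this is the shape of the child-partition DDL that the maintenance job issues against the parent table above. The bounds below are the UTC day 2026-02-04, purely as an example; the project computes local-time day bounds in src/db/partitionManager.js:

```js
// Example daily partition DDL (illustrative bounds: one day expressed in epoch milliseconds).
const ddl = `
  CREATE TABLE IF NOT EXISTS onoffline.onoffline_record_20260204
    PARTITION OF onoffline.onoffline_record
    FOR VALUES FROM (1770163200000) TO (1770249600000);
`;
console.log(ddl);
```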
41
bls-onoffline-backend/scripts/lint.js
Normal file
@@ -0,0 +1,41 @@
|
|||||||
|
import fs from 'fs';
|
||||||
|
import path from 'path';
|
||||||
|
import { fileURLToPath } from 'url';
|
||||||
|
import { spawnSync } from 'child_process';
|
||||||
|
|
||||||
|
const __filename = fileURLToPath(import.meta.url);
|
||||||
|
const __dirname = path.dirname(__filename);
|
||||||
|
const projectRoot = path.resolve(__dirname, '..');
|
||||||
|
const targets = ['src', 'tests'];
|
||||||
|
|
||||||
|
const collectFiles = (dir) => {
|
||||||
|
if (!fs.existsSync(dir)) {
|
||||||
|
return [];
|
||||||
|
}
|
||||||
|
const entries = fs.readdirSync(dir, { withFileTypes: true });
|
||||||
|
return entries.flatMap((entry) => {
|
||||||
|
const fullPath = path.join(dir, entry.name);
|
||||||
|
if (entry.isDirectory()) {
|
||||||
|
return collectFiles(fullPath);
|
||||||
|
}
|
||||||
|
if (entry.isFile() && fullPath.endsWith('.js')) {
|
||||||
|
return [fullPath];
|
||||||
|
}
|
||||||
|
return [];
|
||||||
|
});
|
||||||
|
};
|
||||||
|
|
||||||
|
const files = targets.flatMap((target) => collectFiles(path.join(projectRoot, target)));
|
||||||
|
|
||||||
|
const failures = [];
|
||||||
|
|
||||||
|
files.forEach((file) => {
|
||||||
|
const result = spawnSync(process.execPath, ['--check', file], { stdio: 'inherit' });
|
||||||
|
if (result.status !== 0) {
|
||||||
|
failures.push(file);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
if (failures.length > 0) {
|
||||||
|
process.exit(1);
|
||||||
|
}
|
||||||
36
bls-onoffline-backend/scripts/verify_data.js
Normal file
@@ -0,0 +1,36 @@
|
|||||||
|
|
||||||
|
import { config } from '../src/config/config.js';
|
||||||
|
import dbManager from '../src/db/databaseManager.js';
|
||||||
|
import { logger } from '../src/utils/logger.js';
|
||||||
|
|
||||||
|
const verifyData = async () => {
|
||||||
|
const client = await dbManager.pool.connect();
|
||||||
|
try {
|
||||||
|
console.log('Verifying data in database...');
|
||||||
|
|
||||||
|
// Count total rows
|
||||||
|
const countSql = `SELECT count(*) FROM ${config.db.schema}.${config.db.table}`;
|
||||||
|
const countRes = await client.query(countSql);
|
||||||
|
console.log(`Total rows in ${config.db.schema}.${config.db.table}: ${countRes.rows[0].count}`);
|
||||||
|
|
||||||
|
// Check recent rows
|
||||||
|
const recentSql = `
|
||||||
|
SELECT * FROM ${config.db.schema}.${config.db.table}
|
||||||
|
ORDER BY ts_ms DESC
|
||||||
|
LIMIT 5
|
||||||
|
`;
|
||||||
|
const recentRes = await client.query(recentSql);
|
||||||
|
console.log('Recent 5 rows:');
|
||||||
|
recentRes.rows.forEach(row => {
|
||||||
|
console.log(JSON.stringify(row));
|
||||||
|
});
|
||||||
|
|
||||||
|
} catch (err) {
|
||||||
|
console.error('Error verifying data:', err);
|
||||||
|
} finally {
|
||||||
|
client.release();
|
||||||
|
await dbManager.pool.end();
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
verifyData();
|
||||||
61
bls-onoffline-backend/scripts/verify_partitions.js
Normal file
@@ -0,0 +1,61 @@
|
|||||||
|
import pg from 'pg';
|
||||||
|
import { config } from '../src/config/config.js';
|
||||||
|
|
||||||
|
const { Pool } = pg;
|
||||||
|
|
||||||
|
const verify = async () => {
|
||||||
|
const pool = new Pool({
|
||||||
|
host: config.db.host,
|
||||||
|
port: config.db.port,
|
||||||
|
user: config.db.user,
|
||||||
|
password: config.db.password,
|
||||||
|
database: config.db.database,
|
||||||
|
ssl: config.db.ssl,
|
||||||
|
});
|
||||||
|
|
||||||
|
try {
|
||||||
|
console.log('Verifying partitions for table onoffline_record...');
|
||||||
|
const client = await pool.connect();
|
||||||
|
|
||||||
|
// Check parent table
|
||||||
|
const parentRes = await client.query(`
|
||||||
|
SELECT to_regclass('onoffline.onoffline_record') as oid;
|
||||||
|
`);
|
||||||
|
if (!parentRes.rows[0].oid) {
|
||||||
|
console.error('Parent table onoffline.onoffline_record DOES NOT EXIST.');
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
console.log('Parent table onoffline.onoffline_record exists.');
|
||||||
|
|
||||||
|
// Check partitions
|
||||||
|
const res = await client.query(`
|
||||||
|
SELECT
|
||||||
|
child.relname AS partition_name
|
||||||
|
FROM pg_inherits
|
||||||
|
JOIN pg_class parent ON pg_inherits.inhparent = parent.oid
|
||||||
|
JOIN pg_class child ON pg_inherits.inhrelid = child.oid
|
||||||
|
JOIN pg_namespace ns ON parent.relnamespace = ns.oid
|
||||||
|
WHERE parent.relname = 'onoffline_record' AND ns.nspname = 'onoffline'
|
||||||
|
ORDER BY child.relname;
|
||||||
|
`);
|
||||||
|
|
||||||
|
console.log(`Found ${res.rowCount} partitions.`);
|
||||||
|
res.rows.forEach(row => {
|
||||||
|
console.log(`- ${row.partition_name}`);
|
||||||
|
});
|
||||||
|
|
||||||
|
if (res.rowCount >= 30) {
|
||||||
|
console.log('SUCCESS: At least 30 partitions exist.');
|
||||||
|
} else {
|
||||||
|
console.warn(`WARNING: Expected 30+ partitions, found ${res.rowCount}.`);
|
||||||
|
}
|
||||||
|
|
||||||
|
client.release();
|
||||||
|
} catch (err) {
|
||||||
|
console.error('Verification failed:', err);
|
||||||
|
} finally {
|
||||||
|
await pool.end();
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
verify();
|
||||||
41
bls-onoffline-backend/spec/onoffline-spec.md
Normal file
@@ -0,0 +1,41 @@
|
|||||||
|
bls-onoffline-backend Specification

1. Kafka payload structure
{
  "HotelCode": "1085",
  "MAC": "00:1A:2B:3C:4D:5E",
  "HostNumber": "091123987456",
  "RoomNumber": "8888房",
  "EndPoint": "50.2.60.1:6543",
  "CurrentStatus": "on",
  "CurrentTime": "2026-02-02T10:30:00Z",
  "UnixTime": 1770000235000,
  "LauncherVersion": "1.0.0",
  "RebootReason": "1"
}

2. Kafka topic
Topic: blwlog4Nodejs-rcu-onoffline-topic

3. Database structure
Database: log_platform
Table: onoffline_record
Columns:
  guid varchar(32)
  ts_ms int8
  write_ts_ms int8
  hotel_id int2
  mac varchar(21)
  device_id varchar(64)
  room_id varchar(64)
  ip varchar(21)
  current_status varchar(10)
  launcher_version varchar(10)
  reboot_reason varchar(10)
Primary key: (ts_ms, mac, device_id, room_id)
Partitioned daily by ts_ms

4. Data processing rules
Non-reboot data: reboot_reason is empty or absent; current_status takes CurrentStatus
Reboot data: reboot_reason is non-empty; current_status is fixed to on
All other fields are stored exactly as received from Kafka; empty values are not padded with 0
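A small sketch of processing rule 4 above (hedged illustration; src/processor/index.js in this commit is the authoritative mapping):

```js
// Reboot records force current_status to 'on'; everything else passes through unchanged.
function deriveCurrentStatus(payload) {
  const rebootReason = payload.RebootReason ?? null;
  const hasReboot = rebootReason !== null && rebootReason !== '';
  return hasReboot ? 'on' : (payload.CurrentStatus ?? null); // empty strings are preserved
}

console.log(deriveCurrentStatus({ CurrentStatus: 'off', RebootReason: '' }));  // 'off' (non-reboot)
console.log(deriveCurrentStatus({ CurrentStatus: 'off', RebootReason: '1' })); // 'on'  (reboot)
```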
56
bls-onoffline-backend/src/config/config.js
Normal file
@@ -0,0 +1,56 @@
|
|||||||
|
import dotenv from 'dotenv';
|
||||||
|
|
||||||
|
dotenv.config();
|
||||||
|
|
||||||
|
const parseNumber = (value, defaultValue) => {
|
||||||
|
const parsed = Number(value);
|
||||||
|
return Number.isFinite(parsed) ? parsed : defaultValue;
|
||||||
|
};
|
||||||
|
|
||||||
|
const parseList = (value) =>
|
||||||
|
(value || '')
|
||||||
|
.split(',')
|
||||||
|
.map((item) => item.trim())
|
||||||
|
.filter(Boolean);
|
||||||
|
|
||||||
|
export const config = {
|
||||||
|
env: process.env.NODE_ENV || 'development',
|
||||||
|
port: parseNumber(process.env.PORT, 3001),
|
||||||
|
kafka: {
|
||||||
|
brokers: parseList(process.env.KAFKA_BROKERS),
|
||||||
|
topic: process.env.KAFKA_TOPIC || process.env.KAFKA_TOPICS || 'blwlog4Nodejs-rcu-onoffline-topic',
|
||||||
|
groupId: process.env.KAFKA_GROUP_ID || 'bls-onoffline-group',
|
||||||
|
clientId: process.env.KAFKA_CLIENT_ID || 'bls-onoffline-client',
|
||||||
|
consumerInstances: parseNumber(process.env.KAFKA_CONSUMER_INSTANCES, 1),
|
||||||
|
maxInFlight: parseNumber(process.env.KAFKA_MAX_IN_FLIGHT, 50),
|
||||||
|
fetchMaxBytes: parseNumber(process.env.KAFKA_FETCH_MAX_BYTES, 10 * 1024 * 1024),
|
||||||
|
fetchMinBytes: parseNumber(process.env.KAFKA_FETCH_MIN_BYTES, 1),
|
||||||
|
fetchMaxWaitMs: parseNumber(process.env.KAFKA_FETCH_MAX_WAIT_MS, 100),
|
||||||
|
autoCommitIntervalMs: parseNumber(process.env.KAFKA_AUTO_COMMIT_INTERVAL_MS, 5000),
|
||||||
|
logMessages: process.env.KAFKA_LOG_MESSAGES === 'true',
|
||||||
|
sasl: process.env.KAFKA_SASL_USERNAME && process.env.KAFKA_SASL_PASSWORD ? {
|
||||||
|
mechanism: process.env.KAFKA_SASL_MECHANISM || 'plain',
|
||||||
|
username: process.env.KAFKA_SASL_USERNAME,
|
||||||
|
password: process.env.KAFKA_SASL_PASSWORD
|
||||||
|
} : undefined
|
||||||
|
},
|
||||||
|
db: {
|
||||||
|
host: process.env.DB_HOST || process.env.POSTGRES_HOST || 'localhost',
|
||||||
|
port: parseNumber(process.env.DB_PORT || process.env.POSTGRES_PORT, 5432),
|
||||||
|
user: process.env.DB_USER || process.env.POSTGRES_USER || 'postgres',
|
||||||
|
password: process.env.DB_PASSWORD || process.env.POSTGRES_PASSWORD || '',
|
||||||
|
database: process.env.DB_DATABASE || process.env.POSTGRES_DATABASE || 'log_platform',
|
||||||
|
max: parseNumber(process.env.DB_MAX_CONNECTIONS || process.env.POSTGRES_MAX_CONNECTIONS, 10),
|
||||||
|
ssl: process.env.DB_SSL === 'true' ? { rejectUnauthorized: false } : undefined,
|
||||||
|
schema: process.env.DB_SCHEMA || 'onoffline',
|
||||||
|
table: process.env.DB_TABLE || 'onoffline_record'
|
||||||
|
},
|
||||||
|
redis: {
|
||||||
|
host: process.env.REDIS_HOST || 'localhost',
|
||||||
|
port: parseNumber(process.env.REDIS_PORT, 6379),
|
||||||
|
password: process.env.REDIS_PASSWORD || undefined,
|
||||||
|
db: parseNumber(process.env.REDIS_DB, 0),
|
||||||
|
projectName: process.env.REDIS_PROJECT_NAME || 'bls-onoffline',
|
||||||
|
apiBaseUrl: process.env.REDIS_API_BASE_URL || `http://localhost:${parseNumber(process.env.PORT, 3001)}`
|
||||||
|
}
|
||||||
|
};
|
||||||
103
bls-onoffline-backend/src/db/databaseManager.js
Normal file
@@ -0,0 +1,103 @@
|
|||||||
|
import pg from 'pg';
|
||||||
|
import { config } from '../config/config.js';
|
||||||
|
import { logger } from '../utils/logger.js';
|
||||||
|
|
||||||
|
const { Pool } = pg;
|
||||||
|
|
||||||
|
const columns = [
|
||||||
|
'guid',
|
||||||
|
'ts_ms',
|
||||||
|
'write_ts_ms',
|
||||||
|
'hotel_id',
|
||||||
|
'mac',
|
||||||
|
'device_id',
|
||||||
|
'room_id',
|
||||||
|
'ip',
|
||||||
|
'current_status',
|
||||||
|
'launcher_version',
|
||||||
|
'reboot_reason'
|
||||||
|
];
|
||||||
|
|
||||||
|
export class DatabaseManager {
|
||||||
|
constructor(dbConfig) {
|
||||||
|
this.pool = new Pool({
|
||||||
|
host: dbConfig.host,
|
||||||
|
port: dbConfig.port,
|
||||||
|
user: dbConfig.user,
|
||||||
|
password: dbConfig.password,
|
||||||
|
database: dbConfig.database,
|
||||||
|
max: dbConfig.max,
|
||||||
|
ssl: dbConfig.ssl
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
async insertRows({ schema, table, rows }) {
|
||||||
|
if (!rows || rows.length === 0) {
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
const values = [];
|
||||||
|
const placeholders = rows.map((row, rowIndex) => {
|
||||||
|
const offset = rowIndex * columns.length;
|
||||||
|
columns.forEach((column) => {
|
||||||
|
values.push(row[column] ?? null);
|
||||||
|
});
|
||||||
|
const params = columns.map((_, columnIndex) => `$${offset + columnIndex + 1}`);
|
||||||
|
return `(${params.join(', ')})`;
|
||||||
|
});
|
||||||
|
const statement = `
|
||||||
|
INSERT INTO ${schema}.${table} (${columns.join(', ')})
|
||||||
|
VALUES ${placeholders.join(', ')}
|
||||||
|
ON CONFLICT DO NOTHING
|
||||||
|
`;
|
||||||
|
try {
|
||||||
|
await this.pool.query(statement, values);
|
||||||
|
} catch (error) {
|
||||||
|
logger.error('Database insert failed', {
|
||||||
|
error: error?.message,
|
||||||
|
schema,
|
||||||
|
table,
|
||||||
|
rowsLength: rows.length
|
||||||
|
});
|
||||||
|
throw error;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async checkConnection() {
|
||||||
|
let client;
|
||||||
|
try {
|
||||||
|
const connectPromise = this.pool.connect();
|
||||||
|
|
||||||
|
// Create a timeout promise that rejects after 5000ms
|
||||||
|
const timeoutPromise = new Promise((_, reject) => {
|
||||||
|
setTimeout(() => reject(new Error('Connection timeout')), 5000);
|
||||||
|
});
|
||||||
|
|
||||||
|
try {
|
||||||
|
// Race the connection attempt against the timeout
|
||||||
|
client = await Promise.race([connectPromise, timeoutPromise]);
|
||||||
|
} catch (raceError) {
|
||||||
|
// If we timed out, the connectPromise might still resolve later.
|
||||||
|
// We must ensure that if it does, the client is released back to the pool immediately.
|
||||||
|
connectPromise.then(c => c.release()).catch(() => {});
|
||||||
|
throw raceError;
|
||||||
|
}
|
||||||
|
|
||||||
|
await client.query('SELECT 1');
|
||||||
|
return true;
|
||||||
|
} catch (err) {
|
||||||
|
logger.error('Database check connection failed', { error: err.message });
|
||||||
|
return false;
|
||||||
|
} finally {
|
||||||
|
if (client) {
|
||||||
|
client.release();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async close() {
|
||||||
|
await this.pool.end();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
const dbManager = new DatabaseManager(config.db);
|
||||||
|
export default dbManager;
|
||||||
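A usage sketch for the manager above, assuming a reachable database configured through .env; the ON CONFLICT DO NOTHING clause is what makes redelivered messages a no-op:

```js
// Run from the package root (illustrative values only).
import dbManager from './src/db/databaseManager.js';

await dbManager.insertRows({
  schema: 'onoffline',
  table: 'onoffline_record',
  rows: [{
    guid: '0f8fad5bd9cb469fa16570867728950e', // 32-character id to fit the guid VARCHAR(32) column
    ts_ms: Date.now(),
    write_ts_ms: Date.now(),
    hotel_id: 1085,
    mac: '00:1A:2B:3C:4D:5E',
    device_id: '091123987456',
    room_id: '8888房',
    ip: '50.2.60.1:6543',
    current_status: 'on',
    launcher_version: '1.0.0',
    reboot_reason: null
  }]
});

await dbManager.close();
```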
100
bls-onoffline-backend/src/db/initializer.js
Normal file
@@ -0,0 +1,100 @@
|
|||||||
|
import pg from 'pg';
|
||||||
|
import fs from 'fs';
|
||||||
|
import path from 'path';
|
||||||
|
import { fileURLToPath } from 'url';
|
||||||
|
import { logger } from '../utils/logger.js';
|
||||||
|
import partitionManager from './partitionManager.js';
|
||||||
|
import dbManager from './databaseManager.js';
|
||||||
|
import { config } from '../config/config.js';
|
||||||
|
|
||||||
|
const __filename = fileURLToPath(import.meta.url);
|
||||||
|
const __dirname = path.dirname(__filename);
|
||||||
|
|
||||||
|
class DatabaseInitializer {
|
||||||
|
async initialize() {
|
||||||
|
logger.info('Starting database initialization check...');
|
||||||
|
|
||||||
|
// 1. Check if database exists, create if not
|
||||||
|
await this.ensureDatabaseExists();
|
||||||
|
|
||||||
|
// 2. Initialize Schema and Parent Table (if not exists)
|
||||||
|
// Note: We need to use dbManager because it connects to the target database
|
||||||
|
await this.ensureSchemaAndTable();
|
||||||
|
|
||||||
|
// 3. Ensure Partitions for the next month
|
||||||
|
await partitionManager.ensurePartitions(30);
|
||||||
|
|
||||||
|
console.log('Database initialization completed successfully.');
|
||||||
|
logger.info('Database initialization completed successfully.');
|
||||||
|
}
|
||||||
|
|
||||||
|
async ensureDatabaseExists() {
|
||||||
|
const { host, port, user, password, database, ssl } = config.db;
|
||||||
|
console.log(`Checking if database '${database}' exists at ${host}:${port}...`);
|
||||||
|
|
||||||
|
// Connect to 'postgres' database to check/create target database
|
||||||
|
const client = new pg.Client({
|
||||||
|
host,
|
||||||
|
port,
|
||||||
|
user,
|
||||||
|
password,
|
||||||
|
database: 'postgres',
|
||||||
|
ssl: ssl ? { rejectUnauthorized: false } : false
|
||||||
|
});
|
||||||
|
|
||||||
|
try {
|
||||||
|
await client.connect();
|
||||||
|
|
||||||
|
const checkRes = await client.query(
|
||||||
|
`SELECT 1 FROM pg_database WHERE datname = $1`,
|
||||||
|
[database]
|
||||||
|
);
|
||||||
|
|
||||||
|
if (checkRes.rowCount === 0) {
|
||||||
|
logger.info(`Database '${database}' does not exist. Creating...`);
|
||||||
|
// CREATE DATABASE cannot run inside a transaction block
|
||||||
|
await client.query(`CREATE DATABASE "${database}"`);
|
||||||
|
console.log(`Database '${database}' created.`);
|
||||||
|
logger.info(`Database '${database}' created.`);
|
||||||
|
} else {
|
||||||
|
console.log(`Database '${database}' already exists.`);
|
||||||
|
logger.info(`Database '${database}' already exists.`);
|
||||||
|
}
|
||||||
|
} catch (err) {
|
||||||
|
logger.error('Error ensuring database exists:', err);
|
||||||
|
throw err;
|
||||||
|
} finally {
|
||||||
|
await client.end();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async ensureSchemaAndTable() {
|
||||||
|
// dbManager connects to the target database
|
||||||
|
const client = await dbManager.pool.connect();
|
||||||
|
try {
|
||||||
|
const sqlPathCandidates = [
|
||||||
|
path.resolve(process.cwd(), 'scripts/init_db.sql'),
|
||||||
|
path.resolve(__dirname, '../scripts/init_db.sql'),
|
||||||
|
path.resolve(__dirname, '../../scripts/init_db.sql')
|
||||||
|
];
|
||||||
|
const sqlPath = sqlPathCandidates.find((candidate) => fs.existsSync(candidate));
|
||||||
|
if (!sqlPath) {
|
||||||
|
throw new Error(`init_db.sql not found. Candidates: ${sqlPathCandidates.join(' | ')}`);
|
||||||
|
}
|
||||||
|
const sql = fs.readFileSync(sqlPath, 'utf8');
|
||||||
|
|
||||||
|
console.log(`Executing init_db.sql from ${sqlPath}...`);
|
||||||
|
logger.info('Executing init_db.sql...');
|
||||||
|
await client.query(sql);
|
||||||
|
console.log('Schema and parent table initialized.');
|
||||||
|
logger.info('Schema and parent table initialized.');
|
||||||
|
} catch (err) {
|
||||||
|
logger.error('Error initializing schema and table:', err);
|
||||||
|
throw err;
|
||||||
|
} finally {
|
||||||
|
client.release();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
export default new DatabaseInitializer();
|
||||||
77
bls-onoffline-backend/src/db/partitionManager.js
Normal file
@@ -0,0 +1,77 @@
|
|||||||
|
import { logger } from '../utils/logger.js';
|
||||||
|
import { config } from '../config/config.js';
|
||||||
|
import dbManager from './databaseManager.js';
|
||||||
|
|
||||||
|
class PartitionManager {
|
||||||
|
/**
|
||||||
|
* Calculate the start and end timestamps (milliseconds) for a given date.
|
||||||
|
* @param {Date} date - The date to calculate for.
|
||||||
|
* @returns {Object} { startMs, endMs, partitionSuffix }
|
||||||
|
*/
|
||||||
|
getPartitionInfo(date) {
|
||||||
|
const yyyy = date.getFullYear();
|
||||||
|
const mm = String(date.getMonth() + 1).padStart(2, '0');
|
||||||
|
const dd = String(date.getDate()).padStart(2, '0');
|
||||||
|
const partitionSuffix = `${yyyy}${mm}${dd}`;
|
||||||
|
|
||||||
|
const start = new Date(date);
|
||||||
|
start.setHours(0, 0, 0, 0);
|
||||||
|
const startMs = start.getTime();
|
||||||
|
|
||||||
|
const end = new Date(date);
|
||||||
|
end.setDate(end.getDate() + 1);
|
||||||
|
end.setHours(0, 0, 0, 0);
|
||||||
|
const endMs = end.getTime();
|
||||||
|
|
||||||
|
return { startMs, endMs, partitionSuffix };
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Ensure partitions exist for the past M days and next N days.
|
||||||
|
* @param {number} daysAhead - Number of days to pre-create.
|
||||||
|
* @param {number} daysBack - Number of days to look back.
|
||||||
|
*/
|
||||||
|
async ensurePartitions(daysAhead = 30, daysBack = 15) {
|
||||||
|
const client = await dbManager.pool.connect();
|
||||||
|
try {
|
||||||
|
logger.info(`Starting partition check for the past ${daysBack} days and next ${daysAhead} days...`);
|
||||||
|
console.log(`Starting partition check for the past ${daysBack} days and next ${daysAhead} days...`);
|
||||||
|
const now = new Date();
|
||||||
|
|
||||||
|
for (let i = -daysBack; i < daysAhead; i++) {
|
||||||
|
const targetDate = new Date(now);
|
||||||
|
targetDate.setDate(now.getDate() + i);
|
||||||
|
|
||||||
|
const { startMs, endMs, partitionSuffix } = this.getPartitionInfo(targetDate);
|
||||||
|
const schema = config.db.schema;
|
||||||
|
const table = config.db.table;
|
||||||
|
const partitionName = `${schema}.${table}_${partitionSuffix}`;
|
||||||
|
|
||||||
|
// Check if partition exists
|
||||||
|
const checkSql = `
|
||||||
|
SELECT to_regclass($1) as exists;
|
||||||
|
`;
|
||||||
|
const checkRes = await client.query(checkSql, [partitionName]);
|
||||||
|
|
||||||
|
if (!checkRes.rows[0].exists) {
|
||||||
|
logger.info(`Creating partition ${partitionName} for range [${startMs}, ${endMs})`);
|
||||||
|
console.log(`Creating partition ${partitionName} for range [${startMs}, ${endMs})`);
|
||||||
|
const createSql = `
|
||||||
|
CREATE TABLE IF NOT EXISTS ${partitionName}
|
||||||
|
PARTITION OF ${schema}.${table}
|
||||||
|
FOR VALUES FROM (${startMs}) TO (${endMs});
|
||||||
|
`;
|
||||||
|
await client.query(createSql);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
logger.info('Partition check completed.');
|
||||||
|
} catch (err) {
|
||||||
|
logger.error('Error ensuring partitions:', err);
|
||||||
|
throw err;
|
||||||
|
} finally {
|
||||||
|
client.release();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
export default new PartitionManager();
|
||||||
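A usage sketch for the partition manager above (assumes database connectivity; dates and output are illustrative):

```js
import partitionManager from './src/db/partitionManager.js';

// Day bounds (local time) that a record written right now would fall into:
const { startMs, endMs, partitionSuffix } = partitionManager.getPartitionInfo(new Date());
console.log(partitionSuffix, startMs, endMs); // e.g. '20260204' plus the [start, end) range in ms

// Pre-create partitions with the defaults: 15 days back, 30 days ahead.
await partitionManager.ensurePartitions();
```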
268
bls-onoffline-backend/src/index.js
Normal file
@@ -0,0 +1,268 @@
|
|||||||
|
import cron from 'node-cron';
|
||||||
|
import { config } from './config/config.js';
|
||||||
|
import dbManager from './db/databaseManager.js';
|
||||||
|
import dbInitializer from './db/initializer.js';
|
||||||
|
import partitionManager from './db/partitionManager.js';
|
||||||
|
import { createKafkaConsumers } from './kafka/consumer.js';
|
||||||
|
import { processKafkaMessage } from './processor/index.js';
|
||||||
|
import { createRedisClient } from './redis/redisClient.js';
|
||||||
|
import { RedisIntegration } from './redis/redisIntegration.js';
|
||||||
|
import { buildErrorQueueKey, enqueueError, startErrorRetryWorker } from './redis/errorQueue.js';
|
||||||
|
import { MetricCollector } from './utils/metricCollector.js';
|
||||||
|
import { logger } from './utils/logger.js';
|
||||||
|
|
||||||
|
const bootstrap = async () => {
|
||||||
|
// Log startup config (masked)
|
||||||
|
logger.info('Starting application with config', {
|
||||||
|
env: process.env.NODE_ENV,
|
||||||
|
db: {
|
||||||
|
host: config.db.host,
|
||||||
|
port: config.db.port,
|
||||||
|
user: config.db.user,
|
||||||
|
database: config.db.database,
|
||||||
|
schema: config.db.schema
|
||||||
|
},
|
||||||
|
kafka: {
|
||||||
|
brokers: config.kafka.brokers,
|
||||||
|
topic: config.kafka.topic,
|
||||||
|
groupId: config.kafka.groupId
|
||||||
|
},
|
||||||
|
redis: {
|
||||||
|
host: config.redis.host,
|
||||||
|
port: config.redis.port
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
// 0. Initialize Database (Create DB, Schema, Table, Partitions)
|
||||||
|
await dbInitializer.initialize();
|
||||||
|
|
||||||
|
// Metric Collector
|
||||||
|
const metricCollector = new MetricCollector();
|
||||||
|
|
||||||
|
// 1. Setup Partition Maintenance Cron Job (Every day at 00:00)
|
||||||
|
cron.schedule('0 0 * * *', async () => {
|
||||||
|
logger.info('Running scheduled partition maintenance...');
|
||||||
|
try {
|
||||||
|
await partitionManager.ensurePartitions(30);
|
||||||
|
} catch (err) {
|
||||||
|
logger.error('Scheduled partition maintenance failed', err);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
// 1.1 The per-minute metric reporting cron is registered further below, after the
// Redis integration is initialized, so each report can also be pushed to Redis.
|
||||||
|
|
||||||
|
const redisClient = await createRedisClient(config.redis);
|
||||||
|
const redisIntegration = new RedisIntegration(
|
||||||
|
redisClient,
|
||||||
|
config.redis.projectName,
|
||||||
|
config.redis.apiBaseUrl
|
||||||
|
);
|
||||||
|
redisIntegration.startHeartbeat();
|
||||||
|
|
||||||
|
// 1.1 Setup Metric Reporting Cron Job (Every minute)
|
||||||
|
cron.schedule('* * * * *', async () => {
|
||||||
|
const metrics = metricCollector.getAndReset();
|
||||||
|
const report = `[Minute Metrics] Pulled: ${metrics.kafka_pulled}, Parse Error: ${metrics.parse_error}, Inserted: ${metrics.db_inserted}, Failed: ${metrics.db_failed}`;
|
||||||
|
console.log(report);
|
||||||
|
logger.info(report, metrics);
|
||||||
|
|
||||||
|
try {
|
||||||
|
await redisIntegration.info('Minute Metrics', metrics);
|
||||||
|
} catch (err) {
|
||||||
|
logger.error('Failed to report metrics to Redis', { error: err?.message });
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
const errorQueueKey = buildErrorQueueKey(config.redis.projectName);
|
||||||
|
|
||||||
|
const handleError = async (error, message) => {
|
||||||
|
logger.error('Kafka processing error', {
|
||||||
|
error: error?.message,
|
||||||
|
type: error?.type,
|
||||||
|
stack: error?.stack
|
||||||
|
});
|
||||||
|
try {
|
||||||
|
await redisIntegration.error('Kafka processing error', {
|
||||||
|
module: 'kafka',
|
||||||
|
stack: error?.stack || error?.message
|
||||||
|
});
|
||||||
|
} catch (redisError) {
|
||||||
|
logger.error('Redis error log failed', { error: redisError?.message });
|
||||||
|
}
|
||||||
|
if (message) {
|
||||||
|
const messageValue = Buffer.isBuffer(message.value)
|
||||||
|
? message.value.toString('utf8')
|
||||||
|
: message.value;
|
||||||
|
try {
|
||||||
|
await enqueueError(redisClient, errorQueueKey, {
|
||||||
|
attempts: 0,
|
||||||
|
value: messageValue,
|
||||||
|
meta: {
|
||||||
|
topic: message.topic,
|
||||||
|
partition: message.partition,
|
||||||
|
offset: message.offset,
|
||||||
|
key: message.key
|
||||||
|
},
|
||||||
|
timestamp: Date.now()
|
||||||
|
});
|
||||||
|
} catch (queueErr) {
|
||||||
|
logger.error('Enqueue error payload failed', { error: queueErr?.message });
|
||||||
|
}
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
const handleMessage = async (message) => {
|
||||||
|
if (message.topic) {
|
||||||
|
metricCollector.increment('kafka_pulled');
|
||||||
|
}
|
||||||
|
|
||||||
|
const messageValue = Buffer.isBuffer(message.value)
|
||||||
|
? message.value.toString('utf8')
|
||||||
|
: message.value;
|
||||||
|
const messageKey = Buffer.isBuffer(message.key)
|
||||||
|
? message.key.toString('utf8')
|
||||||
|
: message.key;
|
||||||
|
|
||||||
|
const logDetails = {
|
||||||
|
topic: message.topic,
|
||||||
|
partition: message.partition,
|
||||||
|
offset: message.offset,
|
||||||
|
key: messageKey,
|
||||||
|
value: config.kafka.logMessages ? messageValue : undefined,
|
||||||
|
valueLength: !config.kafka.logMessages && typeof messageValue === 'string' ? messageValue.length : null
|
||||||
|
};
|
||||||
|
|
||||||
|
// logger.info('Kafka message received', logDetails);
|
||||||
|
|
||||||
|
while (true) {
|
||||||
|
try {
|
||||||
|
const inserted = await processKafkaMessage({ message, dbManager, config });
|
||||||
|
metricCollector.increment('db_inserted');
|
||||||
|
// logger.info('Kafka message processed', { inserted });
|
||||||
|
return; // Success, allowing commit
|
||||||
|
} catch (error) {
|
||||||
|
// Identify DB connection errors
|
||||||
|
const isDbConnectionError =
|
||||||
|
(error.code && ['ECONNREFUSED', '57P03', '08006', '08001', 'EADDRINUSE', 'ETIMEDOUT'].includes(error.code)) ||
|
||||||
|
(error.message && (
|
||||||
|
error.message.includes('ECONNREFUSED') ||
|
||||||
|
error.message.includes('connection') ||
|
||||||
|
error.message.includes('terminated') ||
|
||||||
|
error.message.includes('EADDRINUSE') ||
|
||||||
|
error.message.includes('ETIMEDOUT') ||
|
||||||
|
error.message.includes('The server does not support SSL connections') // Possible if DB restarts without SSL
|
||||||
|
));
|
||||||
|
|
||||||
|
if (isDbConnectionError) {
|
||||||
|
logger.error('Database offline. Pausing consumption for 1 minute...', { error: error.message });
|
||||||
|
// metricCollector.increment('db_failed'); // Maybe not count as fail since we retry? User didn't specify.
|
||||||
|
|
||||||
|
// Wait 1 minute before checking
|
||||||
|
await new Promise(resolve => setTimeout(resolve, 60000));
|
||||||
|
|
||||||
|
// Check connection loop
|
||||||
|
while (true) {
|
||||||
|
const isConnected = await dbManager.checkConnection();
|
||||||
|
if (isConnected) {
|
||||||
|
logger.info('Database connection restored. Resuming processing...');
|
||||||
|
break; // Break check loop to retry processing
|
||||||
|
}
|
||||||
|
logger.warn('Database still offline. Waiting 1 minute...');
|
||||||
|
await new Promise(resolve => setTimeout(resolve, 60000));
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
// Non-connection error (Data error, Parse error, etc.)
|
||||||
|
if (error.type === 'PARSE_ERROR') {
|
||||||
|
metricCollector.increment('parse_error');
|
||||||
|
} else {
|
||||||
|
metricCollector.increment('db_failed');
|
||||||
|
}
|
||||||
|
|
||||||
|
logger.error('Message processing failed (Data/Logic Error), skipping message', {
|
||||||
|
error: error?.message,
|
||||||
|
type: error?.type
|
||||||
|
});
|
||||||
|
|
||||||
|
// Enqueue to error queue
|
||||||
|
await handleError(error, message);
|
||||||
|
|
||||||
|
// For non-connection errors, we must skip this message and commit the offset
|
||||||
|
// so we don't get stuck in an infinite retry loop.
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
const consumers = createKafkaConsumers({
|
||||||
|
kafkaConfig: config.kafka,
|
||||||
|
onMessage: handleMessage,
|
||||||
|
onError: handleError
|
||||||
|
});
|
||||||
|
|
||||||
|
// Start retry worker (non-blocking)
|
||||||
|
startErrorRetryWorker({
|
||||||
|
client: redisClient,
|
||||||
|
queueKey: errorQueueKey,
|
||||||
|
redisIntegration,
|
||||||
|
handler: async (item) => {
|
||||||
|
if (!item?.value) {
|
||||||
|
throw new Error('Missing value in retry payload');
|
||||||
|
}
|
||||||
|
await handleMessage({ value: item.value });
|
||||||
|
}
|
||||||
|
}).catch(err => {
|
||||||
|
logger.error('Retry worker failed', { error: err?.message });
|
||||||
|
});
|
||||||
|
|
||||||
|
// Graceful Shutdown Logic
|
||||||
|
const shutdown = async (signal) => {
|
||||||
|
logger.info(`Received ${signal}, shutting down...`);
|
||||||
|
|
||||||
|
try {
|
||||||
|
// 1. Close Kafka Consumer
|
||||||
|
if (consumers && consumers.length > 0) {
|
||||||
|
await Promise.all(consumers.map(c => new Promise((resolve) => c.close(true, resolve))));
|
||||||
|
logger.info('Kafka consumer closed', { count: consumers.length });
|
||||||
|
}
|
||||||
|
|
||||||
|
// 2. Stop Redis Heartbeat (if method exists, otherwise just close client)
|
||||||
|
// redisIntegration.stopHeartbeat(); // Assuming implementation or just rely on client close
|
||||||
|
|
||||||
|
// 3. Close Redis Client
|
||||||
|
await redisClient.quit();
|
||||||
|
logger.info('Redis client closed');
|
||||||
|
|
||||||
|
// 4. Close Database Pool
|
||||||
|
await dbManager.close();
|
||||||
|
logger.info('Database connection closed');
|
||||||
|
|
||||||
|
process.exit(0);
|
||||||
|
} catch (err) {
|
||||||
|
logger.error('Error during shutdown', { error: err?.message });
|
||||||
|
process.exit(1);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
process.on('SIGTERM', () => shutdown('SIGTERM'));
|
||||||
|
process.on('SIGINT', () => shutdown('SIGINT'));
|
||||||
|
};
|
||||||
|
|
||||||
|
bootstrap().catch((error) => {
|
||||||
|
logger.error('Service bootstrap failed', { error: error?.message });
|
||||||
|
process.exit(1);
|
||||||
|
});
|
||||||
140
bls-onoffline-backend/src/kafka/consumer.js
Normal file
@@ -0,0 +1,140 @@
|
|||||||
|
import kafka from 'kafka-node';
|
||||||
|
import { logger } from '../utils/logger.js';
|
||||||
|
|
||||||
|
const { ConsumerGroup } = kafka;
|
||||||
|
|
||||||
|
import { OffsetTracker } from './offsetTracker.js';
|
||||||
|
|
||||||
|
const createOneConsumer = ({ kafkaConfig, onMessage, onError, instanceIndex }) => {
|
||||||
|
const kafkaHost = kafkaConfig.brokers.join(',');
|
||||||
|
const clientId = instanceIndex === 0 ? kafkaConfig.clientId : `${kafkaConfig.clientId}-${instanceIndex}`;
|
||||||
|
const id = `${clientId}-${process.pid}-${Date.now()}`;
|
||||||
|
const maxInFlight = Number.isFinite(kafkaConfig.maxInFlight) ? kafkaConfig.maxInFlight : 50;
|
||||||
|
let inFlight = 0;
|
||||||
|
|
||||||
|
const tracker = new OffsetTracker();
|
||||||
|
|
||||||
|
const consumer = new ConsumerGroup(
|
||||||
|
{
|
||||||
|
kafkaHost,
|
||||||
|
groupId: kafkaConfig.groupId,
|
||||||
|
clientId,
|
||||||
|
id,
|
||||||
|
fromOffset: 'earliest',
|
||||||
|
protocol: ['roundrobin'],
|
||||||
|
outOfRangeOffset: 'latest',
|
||||||
|
autoCommit: false,
|
||||||
|
autoCommitIntervalMs: kafkaConfig.autoCommitIntervalMs,
|
||||||
|
fetchMaxBytes: kafkaConfig.fetchMaxBytes,
|
||||||
|
fetchMinBytes: kafkaConfig.fetchMinBytes,
|
||||||
|
fetchMaxWaitMs: kafkaConfig.fetchMaxWaitMs,
|
||||||
|
sasl: kafkaConfig.sasl
|
||||||
|
},
|
||||||
|
kafkaConfig.topic
|
||||||
|
);
|
||||||
|
|
||||||
|
const tryResume = () => {
|
||||||
|
if (inFlight < maxInFlight && consumer.paused) {
|
||||||
|
consumer.resume();
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
consumer.on('message', (message) => {
|
||||||
|
inFlight += 1;
|
||||||
|
tracker.add(message.topic, message.partition, message.offset);
|
||||||
|
|
||||||
|
if (inFlight >= maxInFlight) {
|
||||||
|
consumer.pause();
|
||||||
|
}
|
||||||
|
Promise.resolve(onMessage(message))
|
||||||
|
.then(() => {
|
||||||
|
// Mark message as done and check if we can commit
|
||||||
|
const commitOffset = tracker.markDone(message.topic, message.partition, message.offset);
|
||||||
|
|
||||||
|
if (commitOffset !== null) {
|
||||||
|
consumer.sendOffsetCommitRequest([{
|
||||||
|
topic: message.topic,
|
||||||
|
partition: message.partition,
|
||||||
|
offset: commitOffset,
|
||||||
|
metadata: 'm'
|
||||||
|
}], (err) => {
|
||||||
|
if (err) {
|
||||||
|
logger.error('Kafka commit failed', { error: err?.message, topic: message.topic, partition: message.partition, offset: commitOffset });
|
||||||
|
}
|
||||||
|
});
|
||||||
|
}
|
||||||
|
})
|
||||||
|
.catch((error) => {
|
||||||
|
logger.error('Kafka message handling failed', { error: error?.message });
|
||||||
|
if (onError) {
|
||||||
|
onError(error, message);
|
||||||
|
}
|
||||||
|
})
|
||||||
|
.finally(() => {
|
||||||
|
inFlight -= 1;
|
||||||
|
tryResume();
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
consumer.on('error', (error) => {
|
||||||
|
logger.error('Kafka consumer error', { error: error?.message });
|
||||||
|
if (onError) {
|
||||||
|
onError(error);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
consumer.on('connect', () => {
|
||||||
|
logger.info(`Kafka Consumer connected`, {
|
||||||
|
groupId: kafkaConfig.groupId,
|
||||||
|
clientId: clientId
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
consumer.on('rebalancing', () => {
|
||||||
|
logger.info(`Kafka Consumer rebalancing`, {
|
||||||
|
groupId: kafkaConfig.groupId,
|
||||||
|
clientId: clientId
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
consumer.on('rebalanced', () => {
|
||||||
|
logger.info('Kafka Consumer rebalanced', { clientId, groupId: kafkaConfig.groupId });
|
||||||
|
});
|
||||||
|
|
||||||
|
consumer.on('offsetOutOfRange', (error) => {
logger.warn('Kafka Consumer offset out of range', {
error: error?.message,
topic: error?.topic,
partition: error?.partition,
groupId: kafkaConfig.groupId,
clientId
});
});
|
||||||
|
|
||||||
|
consumer.on('close', () => {
|
||||||
|
logger.warn(`Kafka Consumer closed`, {
|
||||||
|
groupId: kafkaConfig.groupId,
|
||||||
|
clientId: clientId
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
return consumer;
|
||||||
|
};
|
||||||
|
|
||||||
|
export const createKafkaConsumers = ({ kafkaConfig, onMessage, onError }) => {
|
||||||
|
const instances = Number.isFinite(kafkaConfig.consumerInstances) ? kafkaConfig.consumerInstances : 1;
|
||||||
|
const count = Math.max(1, instances);
|
||||||
|
return Array.from({ length: count }, (_, idx) =>
|
||||||
|
createOneConsumer({ kafkaConfig, onMessage, onError, instanceIndex: idx })
|
||||||
|
);
|
||||||
|
};
|
||||||
|
|
||||||
|
export const createKafkaConsumer = ({ kafkaConfig, onMessage, onError }) =>
|
||||||
|
createKafkaConsumers({ kafkaConfig, onMessage, onError })[0];
|
||||||
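A hedged usage sketch for the factory above; broker addresses and paths are illustrative, and the config mirrors the fields read in createOneConsumer:

```js
import { createKafkaConsumer } from './src/kafka/consumer.js';

const consumer = createKafkaConsumer({
  kafkaConfig: {
    brokers: ['localhost:9092'],
    topic: 'blwlog4Nodejs-rcu-onoffline-topic-0',
    groupId: 'bls-onoffline-group',
    clientId: 'bls-onoffline-client',
    consumerInstances: 1,
    maxInFlight: 50,            // above this many unacknowledged messages the consumer pauses
    fetchMaxBytes: 10 * 1024 * 1024,
    fetchMinBytes: 1,
    fetchMaxWaitMs: 100,
    autoCommitIntervalMs: 5000  // offsets are actually committed manually after successful writes
  },
  onMessage: async (message) => {
    console.log('received', message.topic, message.partition, message.offset);
  },
  onError: (error) => console.error('consumer error', error?.message)
});

// consumer is a kafka-node ConsumerGroup; close it with consumer.close(true, () => {}).
```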
45
bls-onoffline-backend/src/kafka/offsetTracker.js
Normal file
@@ -0,0 +1,45 @@
|
|||||||
|
export class OffsetTracker {
|
||||||
|
constructor() {
|
||||||
|
// Map<topic-partition, Array<{ offset: number, done: boolean }>>
|
||||||
|
this.partitions = new Map();
|
||||||
|
}
|
||||||
|
|
||||||
|
// Called when a message is received (before processing)
|
||||||
|
add(topic, partition, offset) {
|
||||||
|
const key = `${topic}-${partition}`;
|
||||||
|
if (!this.partitions.has(key)) {
|
||||||
|
this.partitions.set(key, []);
|
||||||
|
}
|
||||||
|
this.partitions.get(key).push({ offset, done: false });
|
||||||
|
}
|
||||||
|
|
||||||
|
// Called when a message is successfully processed
|
||||||
|
// Returns the next offset to commit (if any advancement is possible), or null
|
||||||
|
markDone(topic, partition, offset) {
|
||||||
|
const key = `${topic}-${partition}`;
|
||||||
|
const list = this.partitions.get(key);
|
||||||
|
if (!list) return null;
|
||||||
|
|
||||||
|
const item = list.find(i => i.offset === offset);
|
||||||
|
if (item) {
|
||||||
|
item.done = true;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Find the highest continuous committed offset
|
||||||
|
// We can remove items from the front as long as they are done
|
||||||
|
let lastDoneOffset = null;
|
||||||
|
let itemsRemoved = false;
|
||||||
|
|
||||||
|
while (list.length > 0 && list[0].done) {
|
||||||
|
lastDoneOffset = list[0].offset;
|
||||||
|
list.shift();
|
||||||
|
itemsRemoved = true;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (itemsRemoved && lastDoneOffset !== null) {
|
||||||
|
return lastDoneOffset + 1; // Kafka expects the *next* offset to fetch
|
||||||
|
}
|
||||||
|
|
||||||
|
return null;
|
||||||
|
}
|
||||||
|
}
|
||||||
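A usage sketch showing the commit gating the tracker provides: an offset becomes eligible for commit only once every earlier offset on the same partition is done.

```js
import { OffsetTracker } from './src/kafka/offsetTracker.js';

const tracker = new OffsetTracker();
tracker.add('topic-0', 0, 1);
tracker.add('topic-0', 0, 2);
tracker.add('topic-0', 0, 3);

console.log(tracker.markDone('topic-0', 0, 2)); // null - offset 1 is still in flight
console.log(tracker.markDone('topic-0', 0, 1)); // 3    - offsets 1 and 2 done, commit next offset 3
console.log(tracker.markDone('topic-0', 0, 3)); // 4    - everything done, commit next offset 4
```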
137
bls-onoffline-backend/src/processor/index.js
Normal file
@@ -0,0 +1,137 @@
|
|||||||
|
import { createGuid } from '../utils/uuid.js';
|
||||||
|
import { kafkaPayloadSchema } from '../schema/kafkaPayload.js';
import { logger } from '../utils/logger.js';
|
||||||
|
|
||||||
|
const parseKafkaPayload = (value) => {
|
||||||
|
const raw = Buffer.isBuffer(value) ? value.toString('utf8') : value;
|
||||||
|
if (typeof raw !== 'string') {
|
||||||
|
throw new Error('Invalid kafka message value');
|
||||||
|
}
|
||||||
|
return JSON.parse(raw);
|
||||||
|
};
|
||||||
|
|
||||||
|
const normalizeText = (value, maxLength) => {
|
||||||
|
if (value === undefined || value === null) {
|
||||||
|
return null;
|
||||||
|
}
|
||||||
|
const str = String(value);
|
||||||
|
if (maxLength && str.length > maxLength) {
|
||||||
|
return str.substring(0, maxLength);
|
||||||
|
}
|
||||||
|
return str;
|
||||||
|
};
|
||||||
|
|
||||||
|
export const buildRowsFromMessageValue = (value) => {
|
||||||
|
const payload = parseKafkaPayload(value);
|
||||||
|
return buildRowsFromPayload(payload);
|
||||||
|
};
|
||||||
|
|
||||||
|
export const buildRowsFromPayload = (rawPayload) => {
|
||||||
|
const payload = kafkaPayloadSchema.parse(rawPayload);
|
||||||
|
|
||||||
|
// Database limit is VARCHAR(255)
|
||||||
|
const rebootReason = normalizeText(payload.RebootReason, 255);
|
||||||
|
const currentStatusRaw = normalizeText(payload.CurrentStatus, 255);
|
||||||
|
const hasRebootReason = rebootReason !== null && rebootReason !== '';
|
||||||
|
const currentStatus = hasRebootReason ? 'on' : currentStatusRaw;
|
||||||
|
|
||||||
|
// Derive timestamp: UnixTime -> CurrentTime -> Date.now()
|
||||||
|
let tsMs = payload.UnixTime;
|
||||||
|
|
||||||
|
// Heuristic: If timestamp is small (e.g., < 100000000000), assume it's seconds and convert to ms
|
||||||
|
if (typeof tsMs === 'number' && tsMs < 100000000000) {
|
||||||
|
tsMs = tsMs * 1000;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (!tsMs && payload.CurrentTime) {
|
||||||
|
const parsed = Date.parse(payload.CurrentTime);
|
||||||
|
if (!isNaN(parsed)) {
|
||||||
|
tsMs = parsed;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if (!tsMs) {
|
||||||
|
tsMs = Date.now();
|
||||||
|
}
|
||||||
|
|
||||||
|
// Ensure PK fields are not null
|
||||||
|
const mac = normalizeText(payload.MAC) || '';
|
||||||
|
const deviceId = normalizeText(payload.HostNumber) || '';
|
||||||
|
const roomId = normalizeText(payload.RoomNumber) || '';
|
||||||
|
|
||||||
|
const row = {
|
||||||
|
guid: createGuid(),
|
||||||
|
ts_ms: tsMs,
|
||||||
|
write_ts_ms: Date.now(),
|
||||||
|
hotel_id: payload.HotelCode,
|
||||||
|
mac: mac,
|
||||||
|
device_id: deviceId,
|
||||||
|
room_id: roomId,
|
||||||
|
ip: normalizeText(payload.EndPoint),
|
||||||
|
current_status: currentStatus,
|
||||||
|
launcher_version: normalizeText(payload.LauncherVersion, 255),
|
||||||
|
reboot_reason: rebootReason
|
||||||
|
};
|
||||||
|
|
||||||
|
return [row];
|
||||||
|
};
|
||||||
|
|
||||||
|
export const processKafkaMessage = async ({ message, dbManager, config }) => {
|
||||||
|
let rows;
|
||||||
|
try {
|
||||||
|
const rawValue = message.value.toString();
|
||||||
|
// logger.info('Processing message', { offset: message.offset, rawValuePreview: rawValue.substring(0, 100) });
|
||||||
|
|
||||||
|
let payload;
|
||||||
|
try {
|
||||||
|
payload = JSON.parse(rawValue);
|
||||||
|
} catch (e) {
|
||||||
|
logger.error('JSON Parse Error', { error: e.message, rawValue });
|
||||||
|
const error = new Error(`JSON Parse Error: ${e.message}`);
|
||||||
|
error.type = 'PARSE_ERROR';
|
||||||
|
throw error;
|
||||||
|
}
|
||||||
|
|
||||||
|
// logger.info('Payload parsed', { payload });
|
||||||
|
|
||||||
|
const validationResult = kafkaPayloadSchema.safeParse(payload);
|
||||||
|
|
||||||
|
if (!validationResult.success) {
|
||||||
|
logger.error('Schema Validation Failed', {
|
||||||
|
errors: validationResult.error.errors,
|
||||||
|
payload
|
||||||
|
});
|
||||||
|
const error = new Error(`Schema Validation Failed: ${JSON.stringify(validationResult.error.errors)}`);
|
||||||
|
error.type = 'VALIDATION_ERROR';
|
||||||
|
throw error;
|
||||||
|
}
|
||||||
|
|
||||||
|
rows = buildRowsFromPayload(payload);
|
||||||
|
} catch (error) {
|
||||||
|
throw error;
|
||||||
|
}
|
||||||
|
|
||||||
|
try {
|
||||||
|
await dbManager.insertRows({ schema: config.db.schema, table: config.db.table, rows });
|
||||||
|
// if (rows.length > 0) {
|
||||||
|
// console.log(`Inserted ${rows.length} rows. Sample GUID: ${rows[0].guid}, TS: ${rows[0].ts_ms}`);
|
||||||
|
// }
|
||||||
|
} catch (error) {
|
||||||
|
error.type = 'DB_ERROR';
|
||||||
|
const sample = rows?.[0];
|
||||||
|
error.dbContext = {
|
||||||
|
rowsLength: rows?.length || 0,
|
||||||
|
sampleRow: sample
|
||||||
|
? {
|
||||||
|
guid: sample.guid,
|
||||||
|
ts_ms: sample.ts_ms,
|
||||||
|
mac: sample.mac,
|
||||||
|
device_id: sample.device_id,
|
||||||
|
room_id: sample.room_id,
|
||||||
|
current_status: sample.current_status
|
||||||
|
}
|
||||||
|
: null
|
||||||
|
};
|
||||||
|
throw error;
|
||||||
|
}
|
||||||
|
|
||||||
|
return rows.length;
|
||||||
|
};
|
||||||
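A quick worked example of the two rules implemented above (reboot override and the seconds-to-milliseconds heuristic), runnable against this module as-is; the payload values are hypothetical.

import { buildRowsFromPayload } from './src/processor/index.js'; // relative to the bls-onoffline-backend package root

// Reboot payload: RebootReason is non-empty, so current_status is forced to 'on'
const [rebootRow] = buildRowsFromPayload({
  HotelCode: 1085,
  MAC: '00:1A:2B:3C:4D:5E',
  HostNumber: '091123987456',
  RoomNumber: '8888房',
  CurrentStatus: 'off',
  UnixTime: 1770000235,      // seconds; the heuristic multiplies this by 1000
  RebootReason: '0x01'
});
console.log(rebootRow.current_status); // 'on'
console.log(rebootRow.ts_ms);          // 1770000235000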
83
bls-onoffline-backend/src/processor/udpParser.js
Normal file
@@ -0,0 +1,83 @@
const normalizeHex = (hex) => {
  if (typeof hex !== 'string') {
    return '';
  }
  let cleaned = hex.trim().replace(/^0x/i, '').replace(/\s+/g, '');
  if (cleaned.length % 2 === 1) {
    cleaned = `0${cleaned}`;
  }
  return cleaned;
};

const toHex = (value) => `0x${value.toString(16).padStart(2, '0')}`;

const readUInt16 = (buffer, offset) => buffer.readUInt16BE(offset);

export const parse0x36 = (udpRaw) => {
  const cleaned = normalizeHex(udpRaw);
  const buffer = cleaned ? Buffer.from(cleaned, 'hex') : Buffer.alloc(0);
  const sysLockStatus = buffer.length > 0 ? buffer[0] : null;
  const reportCount = buffer.length > 7 ? buffer[7] : null;
  let offset = 8;
  const devices = [];
  for (let i = 0; i < (reportCount || 0) && offset + 5 < buffer.length; i += 1) {
    devices.push({
      dev_type: buffer[offset],
      dev_addr: buffer[offset + 1],
      dev_loop: readUInt16(buffer, offset + 2),
      dev_data: readUInt16(buffer, offset + 4)
    });
    offset += 6;
  }
  const faultCount = offset < buffer.length ? buffer[offset] : null;
  offset += 1;
  const faults = [];
  for (let i = 0; i < (faultCount || 0) && offset + 5 < buffer.length; i += 1) {
    faults.push({
      fault_dev_type: buffer[offset],
      fault_dev_addr: buffer[offset + 1],
      fault_dev_loop: readUInt16(buffer, offset + 2),
      error_type: buffer[offset + 4],
      error_data: buffer[offset + 5]
    });
    offset += 6;
  }
  return {
    sysLockStatus,
    reportCount,
    faultCount,
    devices,
    faults
  };
};

export const parse0x0fDownlink = (udpRaw) => {
  const cleaned = normalizeHex(udpRaw);
  const buffer = cleaned ? Buffer.from(cleaned, 'hex') : Buffer.alloc(0);
  const controlCount = buffer.length > 0 ? buffer[0] : null;
  let offset = 1;
  const controlParams = [];
  for (let i = 0; i < (controlCount || 0) && offset + 5 < buffer.length; i += 1) {
    const typeValue = readUInt16(buffer, offset + 4);
    controlParams.push({
      dev_type: buffer[offset],
      dev_addr: buffer[offset + 1],
      loop: readUInt16(buffer, offset + 2),
      type: typeValue,
      type_l: buffer[offset + 4],
      type_h: buffer[offset + 5]
    });
    offset += 6;
  }
  return {
    controlCount,
    controlParams
  };
};

export const parse0x0fAck = (udpRaw) => {
  const cleaned = normalizeHex(udpRaw);
  const buffer = cleaned ? Buffer.from(cleaned, 'hex') : Buffer.alloc(0);
  const ackCode = buffer.length > 1 ? toHex(buffer[1]) : null;
  return { ackCode };
};
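A small worked example for parse0x36. The hex frame below is hypothetical and only follows what the parser actually reads (byte 0, byte 7, then the device/fault records); bytes 1–6 are zero padding.

import { parse0x36 } from './src/processor/udpParser.js';

// byte 0: lock status, bytes 1-6: padding, byte 7: report count (1),
// then one 6-byte device record, then fault count (0)
const frame = '01 000000000000 01 02 03 0001 0064 00';
console.log(parse0x36(frame));
// {
//   sysLockStatus: 1,
//   reportCount: 1,
//   faultCount: 0,
//   devices: [ { dev_type: 2, dev_addr: 3, dev_loop: 1, dev_data: 100 } ],
//   faults: []
// }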
53
bls-onoffline-backend/src/redis/errorQueue.js
Normal file
@@ -0,0 +1,53 @@
import { logger } from '../utils/logger.js';

export const buildErrorQueueKey = (projectName) => `${projectName}_error_queue`;

export const enqueueError = async (client, queueKey, payload) => {
  try {
    await client.rPush(queueKey, JSON.stringify(payload));
  } catch (error) {
    logger.error('Redis enqueue error failed', { error: error?.message });
    throw error;
  }
};

export const startErrorRetryWorker = async ({
  client,
  queueKey,
  handler,
  redisIntegration,
  maxAttempts = 5
}) => {
  while (true) {
    const result = await client.blPop(queueKey, 0);
    const raw = result?.element;
    if (!raw) {
      continue;
    }
    let item;
    try {
      item = JSON.parse(raw);
    } catch (error) {
      logger.error('Invalid error payload', { error: error?.message });
      await redisIntegration.error('Invalid error payload', { module: 'redis', stack: error?.message });
      continue;
    }
    const attempts = item.attempts || 0;
    try {
      await handler(item);
    } catch (error) {
      logger.error('Retry handler failed', { error: error?.message, stack: error?.stack });
      const nextPayload = {
        ...item,
        attempts: attempts + 1,
        lastError: error?.message,
        lastAttemptAt: Date.now()
      };
      if (nextPayload.attempts >= maxAttempts) {
        await redisIntegration.error('Retry attempts exceeded', { module: 'retry', stack: JSON.stringify(nextPayload) });
      } else {
        await enqueueError(client, queueKey, nextPayload);
      }
    }
  }
};
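A sketch of how the error queue might be wired together. The Redis config values, the payload fields, and the handler body are assumptions; a plain object stands in for RedisIntegration to keep the example self-contained. Note that blPop blocks its connection, so the worker gets its own client.

import { createRedisClient } from './src/redis/redisClient.js';
import { buildErrorQueueKey, enqueueError, startErrorRetryWorker } from './src/redis/errorQueue.js';

const redisConfig = { host: '127.0.0.1', port: 6379, db: 0 }; // assumed values
const writer = await createRedisClient(redisConfig);   // connection used for rPush
const blocker = await createRedisClient(redisConfig);  // dedicated connection, because blPop blocks it

const queueKey = buildErrorQueueKey('bls-onoffline-backend');
const redisIntegration = { error: async (msg, ctx) => console.error(msg, ctx) }; // minimal stand-in

// a failed message goes onto the queue with retry bookkeeping (field names assumed)
await enqueueError(writer, queueKey, { rawValue: '{"HotelCode":1085}', errorType: 'DB_ERROR', attempts: 0 });

// the worker loops forever, re-queueing failures until maxAttempts is reached
startErrorRetryWorker({
  client: blocker,
  queueKey,
  redisIntegration,
  handler: async (item) => {
    // re-run the original processing for item.rawValue here
  }
});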
14
bls-onoffline-backend/src/redis/redisClient.js
Normal file
@@ -0,0 +1,14 @@
import { createClient } from 'redis';

export const createRedisClient = async (config) => {
  const client = createClient({
    socket: {
      host: config.host,
      port: config.port
    },
    password: config.password,
    database: config.db
  });
  await client.connect();
  return client;
};
40
bls-onoffline-backend/src/redis/redisIntegration.js
Normal file
@@ -0,0 +1,40 @@
export class RedisIntegration {
  constructor(client, projectName, apiBaseUrl) {
    this.client = client;
    this.projectName = projectName;
    this.apiBaseUrl = apiBaseUrl;
    this.heartbeatKey = '项目心跳'; // shared heartbeat list ("project heartbeat")
    this.logKey = `${projectName}_项目控制台`; // per-project console log list ("project console")
  }

  async info(message, context) {
    const payload = {
      timestamp: new Date().toISOString(),
      level: 'info',
      message,
      metadata: context || undefined
    };
    await this.client.rPush(this.logKey, JSON.stringify(payload));
  }

  async error(message, context) {
    const payload = {
      timestamp: new Date().toISOString(),
      level: 'error',
      message,
      metadata: context || undefined
    };
    await this.client.rPush(this.logKey, JSON.stringify(payload));
  }

  startHeartbeat() {
    setInterval(() => {
      const payload = {
        projectName: this.projectName,
        apiBaseUrl: this.apiBaseUrl,
        lastActiveAt: Date.now()
      };
      // catch so a transient Redis failure does not become an unhandled rejection
      this.client.rPush(this.heartbeatKey, JSON.stringify(payload)).catch(() => {});
    }, 3000);
  }
}
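A minimal usage sketch for the integration class above; the host/port and the apiBaseUrl value are assumptions.

import { createRedisClient } from './src/redis/redisClient.js';
import { RedisIntegration } from './src/redis/redisIntegration.js';

const client = await createRedisClient({ host: '127.0.0.1', port: 6379, db: 0 }); // assumed config
const integration = new RedisIntegration(client, 'bls-onoffline-backend', 'http://localhost:3000');

integration.startHeartbeat();                      // pushes to the shared '项目心跳' list every 3 s
await integration.info('service started', { module: 'bootstrap' });
await integration.error('db insert failed', { module: 'db', stack: 'details...' });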
32
bls-onoffline-backend/src/schema/kafkaPayload.js
Normal file
@@ -0,0 +1,32 @@
import { z } from 'zod';

const toNumber = (value) => {
  if (value === undefined || value === null || value === '') {
    return value;
  }
  if (typeof value === 'number') {
    return value;
  }
  const parsed = Number(value);
  return Number.isFinite(parsed) ? parsed : value;
};

const toStringAllowEmpty = (value) => {
  if (value === undefined || value === null) {
    return value;
  }
  return String(value);
};

export const kafkaPayloadSchema = z.object({
  HotelCode: z.preprocess(toNumber, z.number()),
  MAC: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  HostNumber: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  RoomNumber: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  EndPoint: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  CurrentStatus: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  CurrentTime: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  UnixTime: z.preprocess(toNumber, z.number().nullable()).optional().nullable(),
  LauncherVersion: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable(),
  RebootReason: z.preprocess(toStringAllowEmpty, z.string().nullable()).optional().nullable()
});
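A quick check of the coercion behaviour above: only HotelCode is required, string-typed numbers are converted before validation, and number-typed strings are stringified.

import { kafkaPayloadSchema } from './src/schema/kafkaPayload.js';

const result = kafkaPayloadSchema.safeParse({ HotelCode: '1085', UnixTime: '1770000235000', RoomNumber: 8888 });
console.log(result.success);          // true
console.log(result.data.HotelCode);   // 1085            (number)
console.log(result.data.UnixTime);    // 1770000235000   (number)
console.log(result.data.RoomNumber);  // '8888'          (string)

console.log(kafkaPayloadSchema.safeParse({}).success); // false: HotelCode is required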
18
bls-onoffline-backend/src/utils/logger.js
Normal file
@@ -0,0 +1,18 @@
const format = (level, message, context) => {
  const payload = {
    level,
    message,
    timestamp: Date.now(),
    ...(context ? { context } : {})
  };
  return JSON.stringify(payload);
};

export const logger = {
  info(message, context) {
    process.stdout.write(`${format('info', message, context)}\n`);
  },
  error(message, context) {
    process.stderr.write(`${format('error', message, context)}\n`);
  }
};
26
bls-onoffline-backend/src/utils/metricCollector.js
Normal file
@@ -0,0 +1,26 @@
export class MetricCollector {
  constructor() {
    this.reset();
  }

  reset() {
    this.metrics = {
      kafka_pulled: 0,
      parse_error: 0,
      db_inserted: 0,
      db_failed: 0
    };
  }

  increment(metric, count = 1) {
    if (this.metrics.hasOwnProperty(metric)) {
      this.metrics[metric] += count;
    }
  }

  getAndReset() {
    const current = { ...this.metrics };
    this.reset();
    return current;
  }
}
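A brief sketch of how the collector above might be flushed periodically; the 60-second interval and reporting through RedisIntegration.info are assumptions.

import { MetricCollector } from './src/utils/metricCollector.js';

const metrics = new MetricCollector();

// in the message path
metrics.increment('kafka_pulled');
metrics.increment('db_inserted');

// periodic flush: getAndReset returns the counters and zeroes them
setInterval(() => {
  const snapshot = metrics.getAndReset();
  console.log('metrics', snapshot); // or: redisIntegration.info('metrics', snapshot)
}, 60000);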
3
bls-onoffline-backend/src/utils/uuid.js
Normal file
@@ -0,0 +1,3 @@
import { randomUUID } from 'crypto';

export const createGuid = () => randomUUID().replace(/-/g, '');
45
bls-onoffline-backend/tests/processor.test.js
Normal file
@@ -0,0 +1,45 @@
import { describe, it, expect } from 'vitest';
import { buildRowsFromPayload } from '../src/processor/index.js';

describe('Processor Logic', () => {
  const basePayload = {
    HotelCode: '1085',
    MAC: '00:1A:2B:3C:4D:5E',
    HostNumber: '091123987456',
    RoomNumber: '8888房',
    EndPoint: '50.2.60.1:6543',
    CurrentStatus: 'off',
    CurrentTime: '2026-02-02T10:30:00Z',
    UnixTime: 1770000235000,
    LauncherVersion: '1.0.0'
  };

  it('should validate required fields', () => {
    expect(() => buildRowsFromPayload({})).toThrow();
  });

  it('should fall back to CurrentTime when UnixTime is missing', () => {
    // UnixTime is optional in the schema, so a missing value does not throw;
    // the processor derives ts_ms from CurrentTime instead.
    const rows = buildRowsFromPayload({ ...basePayload, UnixTime: undefined });
    expect(rows[0].ts_ms).toBe(Date.parse('2026-02-02T10:30:00Z'));
  });

  it('should use current_status from payload for non-reboot data', () => {
    const rows = buildRowsFromPayload({ ...basePayload, RebootReason: null });
    expect(rows).toHaveLength(1);
    expect(rows[0].current_status).toBe('off');
    expect(rows[0].reboot_reason).toBeNull();
  });

  it('should override current_status to on for reboot data', () => {
    const rows = buildRowsFromPayload({ ...basePayload, CurrentStatus: 'off', RebootReason: '0x01' });
    expect(rows).toHaveLength(1);
    expect(rows[0].current_status).toBe('on');
    expect(rows[0].reboot_reason).toBe('0x01');
  });

  it('should keep empty optional fields as empty strings', () => {
    const rows = buildRowsFromPayload({
      ...basePayload,
      LauncherVersion: '',
      RebootReason: ''
    });
    expect(rows[0].launcher_version).toBe('');
    expect(rows[0].reboot_reason).toBe('');
  });
});
12
bls-onoffline-backend/vite.config.js
Normal file
@@ -0,0 +1,12 @@
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    ssr: 'src/index.js',
    outDir: 'dist',
    target: 'node18',
    rollupOptions: {
      external: ['dotenv', 'kafka-node', 'pg', 'redis']
    }
  }
});
45
project.md
Normal file
@@ -0,0 +1,45 @@
1. Kafka data structure
{
"HotelCode": "1085",                   // hotel code
"MAC": "00:1A:2B:3C:4D:5E",            // MAC address
"HostNumber": "091123987456",          // device number
"RoomNumber": "8888房",                // room number
"EndPoint": "50.2.60.1:6543",          // IP:port
"CurrentStatus": "on",                 // current status, enum: on/off; on if this is a reboot
"CurrentTime": "2026-02-02T10:30:00Z", // event time
"UnixTime": 1770000235000,             // Unix timestamp (milliseconds)
"LauncherVersion": "1.0.0",            // launcher version, may be empty
"RebootReason": "1"                    // enum: 0x01 software reset, 0x02 power-on reset, 0x03 external manual reset, 0x04 wake-from-power-down reset, 0x05 watchdog timeout reset, 0x06 other reset, NULL: not a reboot record
}
2. Kafka topic
Topic: blwlog4Nodejs-rcu-onoffline-topic

3. Database structure
Database: log_platform
Table: onoffline_record
Columns:
guid             varchar(32)  GUID (32-char hex, no hyphens, lowercase; generated automatically)
ts_ms            int8         Unix timestamp, from Kafka UnixTime (indexed)
write_ts_ms      int8         write timestamp (generated automatically at insert time)
hotel_id         int2         hotel ID, from Kafka HotelCode (indexed)
mac              varchar(21)  MAC address, from Kafka MAC
device_id        varchar(64)  device ID, from Kafka HostNumber (indexed)
room_id          varchar(64)  room ID, from Kafka RoomNumber (indexed)
ip               varchar(21)  IP address, from Kafka EndPoint
current_status   varchar(10)  current status, from Kafka CurrentStatus (indexed)
launcher_version varchar(10)  launcher version, from Kafka LauncherVersion, nullable
reboot_reason    varchar(10)  reboot reason, from Kafka RebootReason, nullable
- Primary key: (ts_ms, mac, device_id, room_id)

4. Development requirements:
Pull data from Kafka, process it, and write it to the database. The database must also be created, and the table must be partitioned by day on ts_ms (a partition DDL sketch follows these notes).

Special notes on data processing:
- There are two shapes of data:
  - Non-reboot data: reboot_reason is empty or the field is absent
    - For non-reboot data, current_status is taken from Kafka CurrentStatus
  - Reboot data: reboot_reason is not empty
    - For reboot data, current_status defaults to on
- All other fields are written to the database exactly as received from Kafka, with no special handling (if a value is empty, the database column is also left empty; do not pad numeric columns with 0, even for numeric fields).
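As a rough JavaScript illustration of the per-day range partitioning on ts_ms described above; the table/partition naming and the epoch-millisecond bounds are assumptions, not the project's actual DDL.

// Minimal sketch: daily partition for onoffline_record, assuming the parent table
// is range-partitioned on ts_ms (epoch milliseconds).
const partitionDdlForDay = (dayIso) => {
  const start = Date.parse(`${dayIso}T00:00:00Z`);          // first millisecond of the day (UTC)
  const end = start + 24 * 60 * 60 * 1000;                  // exclusive upper bound
  const suffix = dayIso.replace(/-/g, '');
  return `
    CREATE TABLE IF NOT EXISTS onoffline_record_p${suffix}
    PARTITION OF onoffline_record
    FOR VALUES FROM (${start}) TO (${end});
  `;
};

console.log(partitionDdlForDay('2026-02-02'));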