feat: externalize database initialization and partition management

- Remove database initialization and partition management logic from the main service to reduce complexity.
- Add SQL scripts for database initialization and partition management, collected under the SQL_Script directory.
- Remove the ENABLE_DATABASE_INITIALIZATION environment variable to simplify configuration.
- Update package.json with npm scripts for database initialization and partition management.
- Delete the initialization and partition management files that are no longer used.
- Provide a unified command-line interface so database initialization and partition creation can be invoked externally.
New file: SQL_Script/README.md (58 lines)
# SQL_Script

Database initialization and partition management, **independent of the business service**.

> Goal: the main service `bls-rcu-action-backend` only consumes from Kafka and writes to the database; it no longer creates databases, tables, or partitions.

## Files

- `init_rcu_action.sql`
  - Initializes the `rcu_action.rcu_action_events` parent table and its indexes
- `init_room_status.sql`
  - Initializes the `room_status.room_status_moment` parent table and its indexes
- `partition_rcu_action.sql`
  - SQL template for daily RANGE partitions of `rcu_action`
- `partition_room_status.sql`
  - SQL template for LIST partitions of `room_status` by `hotel_id`
- `db_manager.js`
  - Node entry point (CLI, also importable)

## Environment variables

Shared with the main service:

- `DB_HOST` / `POSTGRES_HOST`
- `DB_PORT` / `POSTGRES_PORT`
- `DB_USER` / `POSTGRES_USER`
- `DB_PASSWORD` / `POSTGRES_PASSWORD`
- `DB_DATABASE` / `POSTGRES_DATABASE`
- `DB_SSL=true|false`
- `DB_ADMIN_DATABASE` (optional, defaults to `postgres`)

## Command-line usage

Run from the `bls-rcu-action-backend` directory:

- `npm run db:init:all`
  - Creates the database (if missing) and initializes both parent tables
- `npm run db:init:rcu-action`
- `npm run db:init:room-status`
- `npm run db:partition:rcu-action`
  - Pre-creates partitions for the next 30 days by default
- `npm run db:partition:room-status -- 1001`
  - Creates the partition for hotel_id=1001

## Importing from other programs

```js
import {
  initAll,
  ensureDatabase,
  ensureRcuPartitions,
  ensureRoomStatusPartition
} from '../SQL_Script/db_manager.js';

await initAll();
await ensureRcuPartitions(45);
await ensureRoomStatusPartition(1001);
```
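The `DB_*` / `POSTGRES_*` pairs above fall back in that order. As a sketch, the resolution can be written as a pure function over an env map; the standalone form and the name `resolveDbConfig` are illustrative (`db_manager.js` inlines the same logic directly against `process.env`):

```javascript
// Sketch: resolve connection settings from an env map, preferring DB_* over
// POSTGRES_*, with the same defaults db_manager.js uses. Illustrative only.
const parseNumber = (value, fallback) => {
  const n = Number(value);
  return Number.isFinite(n) ? n : fallback;
};

const resolveDbConfig = (env) => ({
  host: env.DB_HOST || env.POSTGRES_HOST || 'localhost',
  port: parseNumber(env.DB_PORT || env.POSTGRES_PORT, 5432),
  user: env.DB_USER || env.POSTGRES_USER || 'postgres',
  password: env.DB_PASSWORD || env.POSTGRES_PASSWORD || '',
  database: env.DB_DATABASE || env.POSTGRES_DATABASE || 'bls_rcu_action',
  // SSL is opt-in: anything other than the literal string 'true' disables it
  ssl: env.DB_SSL === 'true' ? { rejectUnauthorized: false } : undefined
});

// Example: POSTGRES_* is used only when the DB_* variant is absent
const cfg = resolveDbConfig({ POSTGRES_HOST: 'db.internal', DB_PORT: '6432', DB_SSL: 'false' });
console.log(cfg.host, cfg.port, cfg.ssl); // → db.internal 6432 undefined
```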
New file: SQL_Script/db_manager.js (159 lines)
import fs from 'fs';
import path from 'path';
import pg from 'pg';
import { fileURLToPath } from 'url';

const { Client } = pg;

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const scriptDir = __dirname;

const parseNumber = (value, defaultValue) => {
  const parsed = Number(value);
  return Number.isFinite(parsed) ? parsed : defaultValue;
};

const dbConfig = {
  host: process.env.DB_HOST || process.env.POSTGRES_HOST || 'localhost',
  port: parseNumber(process.env.DB_PORT || process.env.POSTGRES_PORT, 5432),
  user: process.env.DB_USER || process.env.POSTGRES_USER || 'postgres',
  password: process.env.DB_PASSWORD || process.env.POSTGRES_PASSWORD || '',
  database: process.env.DB_DATABASE || process.env.POSTGRES_DATABASE || 'bls_rcu_action',
  ssl: process.env.DB_SSL === 'true' ? { rejectUnauthorized: false } : undefined
};

const withClient = async (runner) => {
  const client = new Client(dbConfig);
  await client.connect();
  try {
    await runner(client);
  } finally {
    await client.end();
  }
};

const executeSqlFile = async (client, fileName) => {
  const filePath = path.join(scriptDir, fileName);
  const sql = fs.readFileSync(filePath, 'utf8');
  await client.query(sql);
};

export const ensureDatabase = async () => {
  const adminClient = new Client({
    ...dbConfig,
    database: process.env.DB_ADMIN_DATABASE || 'postgres'
  });
  await adminClient.connect();
  try {
    const targetDb = dbConfig.database;
    const check = await adminClient.query('SELECT 1 FROM pg_database WHERE datname = $1', [targetDb]);
    if (check.rowCount === 0) {
      await adminClient.query(`CREATE DATABASE "${targetDb}"`);
      console.log(`[SQL_Script] created database: ${targetDb}`);
    } else {
      console.log(`[SQL_Script] database exists: ${targetDb}`);
    }
  } finally {
    await adminClient.end();
  }
};

const toPartitionSuffix = (date) => {
  const yyyy = date.getFullYear();
  const mm = String(date.getMonth() + 1).padStart(2, '0');
  const dd = String(date.getDate()).padStart(2, '0');
  return `${yyyy}${mm}${dd}`;
};

const getDayRange = (date) => {
  const start = new Date(date);
  start.setHours(0, 0, 0, 0);
  const end = new Date(start);
  end.setDate(end.getDate() + 1);
  return { startMs: start.getTime(), endMs: end.getTime() };
};

export const ensureRcuPartitions = async (daysAhead = 30) => {
  const tpl = fs.readFileSync(path.join(scriptDir, 'partition_rcu_action.sql'), 'utf8');
  await withClient(async (client) => {
    const now = new Date();
    for (let i = 0; i < daysAhead; i++) {
      const d = new Date(now);
      d.setDate(now.getDate() + i);
      const suffix = toPartitionSuffix(d);
      const partitionName = `rcu_action.rcu_action_events_${suffix}`;
      const { startMs, endMs } = getDayRange(d);

      const sql = tpl
        .replaceAll('{partition_name}', partitionName)
        .replaceAll('{start_ms}', String(startMs))
        .replaceAll('{end_ms}', String(endMs));

      await client.query(sql);
    }
  });
  console.log(`[SQL_Script] ensured rcu_action partitions for ${daysAhead} days`);
};

export const ensureRoomStatusPartition = async (hotelId) => {
  if (!Number.isFinite(Number(hotelId))) {
    throw new Error('hotelId is required and must be a number');
  }
  const tpl = fs.readFileSync(path.join(scriptDir, 'partition_room_status.sql'), 'utf8');
  const sql = tpl.replaceAll('{hotel_id}', String(hotelId));

  await withClient(async (client) => {
    await client.query(sql);
  });

  console.log(`[SQL_Script] ensured room_status partition for hotel_id=${hotelId}`);
};

export const initAll = async () => {
  await ensureDatabase();
  await withClient(async (client) => {
    await executeSqlFile(client, 'init_rcu_action.sql');
    await executeSqlFile(client, 'init_room_status.sql');
  });
  console.log('[SQL_Script] initialized schemas and tables');
};

const run = async () => {
  const cmd = process.argv[2];

  if (!cmd) {
    throw new Error('missing command: init-all | init-rcu | init-room-status | partition-rcu [days] | partition-room-status <hotelId>');
  }

  switch (cmd) {
    case 'init-all':
      await initAll();
      break;
    case 'init-rcu':
      await withClient((client) => executeSqlFile(client, 'init_rcu_action.sql'));
      console.log('[SQL_Script] initialized rcu_action schema/table');
      break;
    case 'init-room-status':
      await withClient((client) => executeSqlFile(client, 'init_room_status.sql'));
      console.log('[SQL_Script] initialized room_status schema/table');
      break;
    case 'partition-rcu': {
      const days = parseNumber(process.argv[3], 30);
      await ensureRcuPartitions(days);
      break;
    }
    case 'partition-room-status': {
      const hotelId = process.argv[3];
      await ensureRoomStatusPartition(hotelId);
      break;
    }
    default:
      throw new Error(`unsupported command: ${cmd}`);
  }
};

// Guard the CLI entry point so importing this module does not trigger run().
// (The original called run() unconditionally, which fails with "missing command"
// and exits the process whenever the file is imported rather than executed,
// defeating the documented "CLI + importable" usage.)
if (process.argv[1] && path.resolve(process.argv[1]) === __filename) {
  run().catch((err) => {
    console.error('[SQL_Script] failed:', err?.message || err);
    process.exit(1);
  });
}
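The daily-partition date math above can be exercised standalone. This is a copy of `toPartitionSuffix` / `getDayRange` from the file, with a worked example; note the ranges are local-time midnights, so the exact millisecond values depend on the machine's timezone:

```javascript
// Same logic as toPartitionSuffix/getDayRange in db_manager.js:
// a partition covers [local midnight, next local midnight) in epoch ms.
const toPartitionSuffix = (date) => {
  const yyyy = date.getFullYear();
  const mm = String(date.getMonth() + 1).padStart(2, '0');
  const dd = String(date.getDate()).padStart(2, '0');
  return `${yyyy}${mm}${dd}`;
};

const getDayRange = (date) => {
  const start = new Date(date);
  start.setHours(0, 0, 0, 0);     // truncate to local midnight
  const end = new Date(start);
  end.setDate(end.getDate() + 1); // next local midnight
  return { startMs: start.getTime(), endMs: end.getTime() };
};

const d = new Date(2025, 2, 4, 15, 30); // 4 March 2025, 15:30 local time
console.log(toPartitionSuffix(d));      // → 20250304
const { startMs, endMs } = getDayRange(d);
console.log(endMs - startMs);           // 86400000, except on DST transition days
```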
New file: SQL_Script/init_rcu_action.sql (47 lines)
-- SQL_Script/init_rcu_action.sql
-- Initializes the RCU Action main table and indexes (does not include CREATE DATABASE)

CREATE SCHEMA IF NOT EXISTS rcu_action;

CREATE TABLE IF NOT EXISTS rcu_action.rcu_action_events (
    guid VARCHAR(32) NOT NULL,
    ts_ms BIGINT NOT NULL,
    write_ts_ms BIGINT NOT NULL,
    hotel_id INTEGER NOT NULL,
    room_id VARCHAR(32) NOT NULL,
    device_id VARCHAR(32) NOT NULL,
    direction VARCHAR(10) NOT NULL,
    cmd_word VARCHAR(10) NOT NULL,
    frame_id INTEGER NOT NULL,
    udp_raw TEXT NOT NULL,
    action_type VARCHAR(20) NOT NULL,
    sys_lock_status SMALLINT,
    report_count SMALLINT,
    dev_type SMALLINT,
    dev_addr SMALLINT,
    dev_loop INTEGER,
    dev_data INTEGER,
    fault_count SMALLINT,
    error_type SMALLINT,
    error_data SMALLINT,
    type_l SMALLINT,
    type_h SMALLINT,
    details JSONB,
    extra JSONB,
    loop_name VARCHAR(255),
    PRIMARY KEY (ts_ms, guid)
) PARTITION BY RANGE (ts_ms);

ALTER TABLE rcu_action.rcu_action_events
    ADD COLUMN IF NOT EXISTS device_id VARCHAR(32) NOT NULL DEFAULT '';

ALTER TABLE rcu_action.rcu_action_events
    ADD COLUMN IF NOT EXISTS loop_name VARCHAR(255);

CREATE INDEX IF NOT EXISTS idx_rcu_action_hotel_id ON rcu_action.rcu_action_events (hotel_id);
CREATE INDEX IF NOT EXISTS idx_rcu_action_room_id ON rcu_action.rcu_action_events (room_id);
CREATE INDEX IF NOT EXISTS idx_rcu_action_device_id ON rcu_action.rcu_action_events (device_id);
CREATE INDEX IF NOT EXISTS idx_rcu_action_direction ON rcu_action.rcu_action_events (direction);
CREATE INDEX IF NOT EXISTS idx_rcu_action_cmd_word ON rcu_action.rcu_action_events (cmd_word);
CREATE INDEX IF NOT EXISTS idx_rcu_action_action_type ON rcu_action.rcu_action_events (action_type);
CREATE INDEX IF NOT EXISTS idx_rcu_action_query_main ON rcu_action.rcu_action_events (hotel_id, room_id, ts_ms DESC);
New file: SQL_Script/init_room_status.sql (66 lines)
-- SQL_Script/init_room_status.sql
-- Initializes the Room Status snapshot table and indexes (does not include CREATE DATABASE)

CREATE SCHEMA IF NOT EXISTS room_status;

CREATE TABLE IF NOT EXISTS room_status.room_status_moment (
    guid UUID NOT NULL,
    ts_ms INT8 NOT NULL DEFAULT (EXTRACT(EPOCH FROM CURRENT_TIMESTAMP) * 1000)::BIGINT,
    hotel_id INT2 NOT NULL,
    room_id TEXT NOT NULL,
    device_id TEXT NOT NULL,

    sys_lock_status INT2,
    online_status INT2,
    launcher_version TEXT,
    app_version TEXT,
    config_version TEXT,
    register_ts_ms INT8,
    upgrade_ts_ms INT8,
    config_ts_ms INT8,
    ip TEXT,

    pms_status INT2,
    power_state INT2,
    cardless_state INT2,
    service_mask INT8,
    insert_card INT2,
    bright_g INT2,
    agreement_ver TEXT,

    air_address TEXT[],
    air_state INT2[],
    air_model INT2[],
    air_speed INT2[],
    air_set_temp INT2[],
    air_now_temp INT2[],
    air_solenoid_valve INT2[],

    elec_address TEXT[],
    elec_voltage DOUBLE PRECISION[],
    elec_ampere DOUBLE PRECISION[],
    elec_power DOUBLE PRECISION[],
    elec_phase DOUBLE PRECISION[],
    elec_energy DOUBLE PRECISION[],
    elec_sum_energy DOUBLE PRECISION[],

    carbon_state INT2,
    dev_loops JSONB,
    energy_carbon_sum DOUBLE PRECISION,
    energy_nocard_sum DOUBLE PRECISION,
    external_device JSONB DEFAULT '{}',
    faulty_device_count JSONB DEFAULT '{}',

    PRIMARY KEY (hotel_id, room_id, device_id, guid)
) PARTITION BY LIST (hotel_id);

CREATE INDEX IF NOT EXISTS idx_room_status_moment_hotel_room ON room_status.room_status_moment (hotel_id, room_id);
CREATE INDEX IF NOT EXISTS idx_room_status_moment_device_id ON room_status.room_status_moment (device_id);
CREATE INDEX IF NOT EXISTS idx_room_status_moment_sys_lock ON room_status.room_status_moment (sys_lock_status);
CREATE INDEX IF NOT EXISTS idx_room_status_moment_online ON room_status.room_status_moment (online_status);
CREATE INDEX IF NOT EXISTS idx_room_status_moment_pms ON room_status.room_status_moment (pms_status);
CREATE INDEX IF NOT EXISTS idx_room_status_moment_power ON room_status.room_status_moment (power_state);
CREATE INDEX IF NOT EXISTS idx_room_status_moment_cardless ON room_status.room_status_moment (cardless_state);
CREATE INDEX IF NOT EXISTS idx_room_status_moment_insert_card ON room_status.room_status_moment (insert_card);
CREATE INDEX IF NOT EXISTS idx_room_status_moment_carbon ON room_status.room_status_moment (carbon_state);
CREATE UNIQUE INDEX IF NOT EXISTS idx_room_status_unique_device ON room_status.room_status_moment (hotel_id, room_id, device_id);
New file: SQL_Script/partition_rcu_action.sql (14 lines)
-- SQL_Script/partition_rcu_action.sql
-- This file is a SQL template for daily RANGE partitions.
-- Other programs substitute the parameters and execute it: {partition_name} {start_ms} {end_ms}

-- Example (1741046400000 / 1741132800000 are 2025-03-04 / 2025-03-05 at 00:00 UTC):
-- CREATE TABLE IF NOT EXISTS rcu_action.rcu_action_events_20250304
-- PARTITION OF rcu_action.rcu_action_events
-- FOR VALUES FROM (1741046400000) TO (1741132800000)
-- TABLESPACE ts_hot;

CREATE TABLE IF NOT EXISTS {partition_name}
PARTITION OF rcu_action.rcu_action_events
FOR VALUES FROM ({start_ms}) TO ({end_ms})
TABLESPACE ts_hot;
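The placeholders are filled by plain string substitution, as `db_manager.js` does with `replaceAll`. A minimal rendering sketch; the inlined template string and the `renderRcuPartition` helper name are illustrative stand-ins for reading the file from disk:

```javascript
// Render the RANGE-partition template by substituting its three placeholders.
// The template literal below stands in for reading partition_rcu_action.sql.
const tpl = [
  'CREATE TABLE IF NOT EXISTS {partition_name}',
  'PARTITION OF rcu_action.rcu_action_events',
  'FOR VALUES FROM ({start_ms}) TO ({end_ms})',
  'TABLESPACE ts_hot;'
].join('\n');

const renderRcuPartition = (partitionName, startMs, endMs) =>
  tpl
    .replaceAll('{partition_name}', partitionName)
    .replaceAll('{start_ms}', String(startMs))
    .replaceAll('{end_ms}', String(endMs));

const sql = renderRcuPartition(
  'rcu_action.rcu_action_events_20250304',
  1741046400000, // 2025-03-04T00:00:00Z
  1741132800000  // 2025-03-05T00:00:00Z
);
console.log(sql.includes('FOR VALUES FROM (1741046400000) TO (1741132800000)')); // → true
```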
New file: SQL_Script/partition_room_status.sql (12 lines)
-- SQL_Script/partition_room_status.sql
-- This file is a SQL template for LIST partitions of room_status by hotel_id.
-- Other programs substitute the parameter and execute it: {hotel_id}

-- Example:
-- CREATE TABLE IF NOT EXISTS room_status.room_status_moment_h1001
-- PARTITION OF room_status.room_status_moment
-- FOR VALUES IN (1001);

CREATE TABLE IF NOT EXISTS room_status.room_status_moment_h{hotel_id}
PARTITION OF room_status.room_status_moment
FOR VALUES IN ({hotel_id});
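The `{hotel_id}` value lands unquoted inside the SQL, so it must be validated as numeric before substitution, mirroring the check in `ensureRoomStatusPartition`. An illustrative sketch (the `renderRoomStatusPartition` helper is not part of the codebase):

```javascript
// Render the LIST-partition template for one hotel_id. The value is
// interpolated into SQL text, so reject anything that is not a finite number.
const renderRoomStatusPartition = (hotelId) => {
  if (!Number.isFinite(Number(hotelId))) {
    throw new Error('hotelId is required and must be a number');
  }
  const tpl = [
    'CREATE TABLE IF NOT EXISTS room_status.room_status_moment_h{hotel_id}',
    'PARTITION OF room_status.room_status_moment',
    'FOR VALUES IN ({hotel_id});'
  ].join('\n');
  return tpl.replaceAll('{hotel_id}', String(Number(hotelId)));
};

console.log(renderRoomStatusPartition('1001').split('\n')[0]);
// → CREATE TABLE IF NOT EXISTS room_status.room_status_moment_h1001
```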
File diff suppressed because one or more lines are too long

Changed: environment configuration (-5 lines)

@@ -41,8 +41,3 @@ REDIS_API_BASE_URL=http://localhost:3000
 # ROOM_STATUS_DB_TABLE=room_status_moment

 ENABLE_LOOP_NAME_AUTO_GENERATION=true
-# Database Initialization Configuration
-# Set to 'false' to skip automatic database creation, schema setup, and partition management
-# When disabled, the service will start consuming Kafka messages and writing to existing database immediately
-# Default: true (enable database initialization)
-ENABLE_DATABASE_INITIALIZATION=true
Changed: package.json (+5 npm scripts)

@@ -7,7 +7,12 @@
     "dev": "node src/index.js",
     "build": "vite build --ssr src/index.js --outDir dist",
     "test": "vitest run",
-    "start": "node dist/index.js"
+    "start": "node dist/index.js",
+    "db:init:all": "node ../SQL_Script/db_manager.js init-all",
+    "db:init:rcu-action": "node ../SQL_Script/db_manager.js init-rcu",
+    "db:init:room-status": "node ../SQL_Script/db_manager.js init-room-status",
+    "db:partition:rcu-action": "node ../SQL_Script/db_manager.js partition-rcu",
+    "db:partition:room-status": "node ../SQL_Script/db_manager.js partition-room-status"
   },
   "dependencies": {
     "dotenv": "^16.4.5",
Deleted file (80 lines): the old database initialization SQL script

-- Database Initialization Script for BLS RCU Action Server
-- Description: creates the rcu_action schema and the rcu_action_events partitioned table for storing the RCU communication log stream

CREATE SCHEMA IF NOT EXISTS rcu_action;

CREATE TABLE IF NOT EXISTS rcu_action.rcu_action_events (
    guid VARCHAR(32) NOT NULL,
    ts_ms BIGINT NOT NULL,
    write_ts_ms BIGINT NOT NULL,
    hotel_id INTEGER NOT NULL,
    room_id VARCHAR(32) NOT NULL,
    device_id VARCHAR(32) NOT NULL,
    direction VARCHAR(10) NOT NULL,
    cmd_word VARCHAR(10) NOT NULL,
    frame_id INTEGER NOT NULL,
    udp_raw TEXT NOT NULL,
    action_type VARCHAR(20) NOT NULL,
    sys_lock_status SMALLINT,
    report_count SMALLINT,
    dev_type SMALLINT,
    dev_addr SMALLINT,
    dev_loop INTEGER,
    dev_data INTEGER,
    fault_count SMALLINT,
    error_type SMALLINT,
    error_data SMALLINT,
    type_l SMALLINT,
    type_h SMALLINT,
    details JSONB,
    extra JSONB,
    loop_name VARCHAR(255),
    PRIMARY KEY (ts_ms, guid)
) PARTITION BY RANGE (ts_ms);

ALTER TABLE rcu_action.rcu_action_events
    ADD COLUMN IF NOT EXISTS device_id VARCHAR(32) NOT NULL DEFAULT '';

ALTER TABLE rcu_action.rcu_action_events
    ADD COLUMN IF NOT EXISTS loop_name VARCHAR(255);

-- Indexes for performance (ONLY on parent partitioned table)
-- PostgreSQL will create/attach corresponding child-partition indexes automatically.
-- Do not create duplicated indexes on partition child tables.
CREATE INDEX IF NOT EXISTS idx_rcu_action_hotel_id ON rcu_action.rcu_action_events (hotel_id);
CREATE INDEX IF NOT EXISTS idx_rcu_action_room_id ON rcu_action.rcu_action_events (room_id);
CREATE INDEX IF NOT EXISTS idx_rcu_action_device_id ON rcu_action.rcu_action_events (device_id);
CREATE INDEX IF NOT EXISTS idx_rcu_action_direction ON rcu_action.rcu_action_events (direction);
CREATE INDEX IF NOT EXISTS idx_rcu_action_cmd_word ON rcu_action.rcu_action_events (cmd_word);
CREATE INDEX IF NOT EXISTS idx_rcu_action_action_type ON rcu_action.rcu_action_events (action_type);

-- Composite Index for typical query pattern (Hotel + Room + Time)
CREATE INDEX IF NOT EXISTS idx_rcu_action_query_main ON rcu_action.rcu_action_events (hotel_id, room_id, ts_ms DESC);

-- Column Comments
COMMENT ON TABLE rcu_action.rcu_action_events IS 'RCU communication log table - stores RCU device report/command/ACK events consumed from Kafka';
COMMENT ON COLUMN rcu_action.rcu_action_events.guid IS 'Primary key, 32-character UUID without hyphens';
COMMENT ON COLUMN rcu_action.rcu_action_events.ts_ms IS 'Log timestamp (milliseconds); also the partition key';
COMMENT ON COLUMN rcu_action.rcu_action_events.write_ts_ms IS 'Write timestamp (milliseconds), generated by the backend service on insert';
COMMENT ON COLUMN rcu_action.rcu_action_events.hotel_id IS 'Hotel ID';
COMMENT ON COLUMN rcu_action.rcu_action_events.room_id IS 'Room ID';
COMMENT ON COLUMN rcu_action.rcu_action_events.device_id IS 'RCU device ID (mainboard number)';
COMMENT ON COLUMN rcu_action.rcu_action_events.direction IS 'Data direction: report / command';
COMMENT ON COLUMN rcu_action.rcu_action_events.cmd_word IS 'Command word, e.g. 0x36 (status report), 0x0F (control command/ACK)';
COMMENT ON COLUMN rcu_action.rcu_action_events.frame_id IS 'Communication frame number, links the command and status of one exchange';
COMMENT ON COLUMN rcu_action.rcu_action_events.udp_raw IS 'Raw UDP message (base64 encoded)';
COMMENT ON COLUMN rcu_action.rcu_action_events.action_type IS 'Record type: user action / device loop status / control command / 0F ACK / invalid';
COMMENT ON COLUMN rcu_action.rcu_action_events.sys_lock_status IS 'System lock status: 0 = unlocked, 1 = locked (0x36 reports only)';
COMMENT ON COLUMN rcu_action.rcu_action_events.report_count IS 'Number of devices in this report (length of device_list)';
COMMENT ON COLUMN rcu_action.rcu_action_events.dev_type IS 'Device type code, split from device_list/fault_list/control_list';
COMMENT ON COLUMN rcu_action.rcu_action_events.dev_addr IS 'Device address code';
COMMENT ON COLUMN rcu_action.rcu_action_events.dev_loop IS 'Device loop number';
COMMENT ON COLUMN rcu_action.rcu_action_events.dev_data IS 'Device status value (0x36 status reports only)';
COMMENT ON COLUMN rcu_action.rcu_action_events.fault_count IS 'Number of faulty devices in this report (length of fault_list)';
COMMENT ON COLUMN rcu_action.rcu_action_events.error_type IS 'Fault type: 0x01 = online/offline, 0x02 = power level, 0x03 = current, etc.';
COMMENT ON COLUMN rcu_action.rcu_action_events.error_data IS 'Fault payload (meaning depends on error_type)';
COMMENT ON COLUMN rcu_action.rcu_action_events.type_l IS 'Execution mode (0x0F control commands only)';
COMMENT ON COLUMN rcu_action.rcu_action_events.type_h IS 'Execution content (0x0F control commands only)';
COMMENT ON COLUMN rcu_action.rcu_action_events.details IS 'Business details JSONB: the full device_list / fault_list / control_list';
COMMENT ON COLUMN rcu_action.rcu_action_events.extra IS 'Extension JSONB: extra fields passed from upstream';
COMMENT ON COLUMN rcu_action.rcu_action_events.loop_name IS 'Loop name: resolved via the device_id -> room_type_id -> loop_address lookup';
Changed: the config module (removes enableDatabaseInitialization)

@@ -64,6 +64,5 @@ export const config = {
     schema: process.env.ROOM_STATUS_DB_SCHEMA || 'room_status',
     table: process.env.ROOM_STATUS_DB_TABLE || 'room_status_moment'
   },
-  enableLoopNameAutoGeneration: process.env.ENABLE_LOOP_NAME_AUTO_GENERATION === 'true',
-  enableDatabaseInitialization: process.env.ENABLE_DATABASE_INITIALIZATION !== 'false'
+  enableLoopNameAutoGeneration: process.env.ENABLE_LOOP_NAME_AUTO_GENERATION === 'true'
 };
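A side note on this hunk: the removed flag used the `!== 'false'` idiom (defaults to enabled when unset), while the surviving flag uses `=== 'true'` (defaults to disabled). A standalone illustration of the difference between the two idioms, not part of the codebase:

```javascript
// Two common env-flag idioms and how they treat an unset variable:
const defaultOff = (v) => v === 'true';  // enabled only when explicitly 'true'
const defaultOn = (v) => v !== 'false';  // enabled unless explicitly 'false'

console.log(defaultOff(undefined)); // → false (unset means disabled)
console.log(defaultOn(undefined));  // → true  (unset means enabled)
console.log(defaultOff('TRUE'));    // → false (comparison is case-sensitive)
```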
Deleted file (110 lines): the DatabaseInitializer module

import pg from 'pg';
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
import { logger } from '../utils/logger.js';
import partitionManager from './partitionManager.js';
import dbManager from './databaseManager.js';
import { config } from '../config/config.js';

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

class DatabaseInitializer {
  async initialize() {
    logger.info('Starting database initialization check...');

    // 1. Check if database exists, create if not
    await this.ensureDatabaseExists();

    // 2. Initialize Schema and Parent Table (if not exists)
    // Note: We need to use dbManager because it connects to the target database
    await this.ensureSchemaAndTable();

    // 3. Ensure Partitions for the next month
    await partitionManager.ensurePartitions(30);

    logger.info('Database initialization completed successfully.');
  }

  async ensureDatabaseExists() {
    const { host, port, user, password, database, ssl } = config.db;

    // Connect to 'postgres' database to check/create target database
    const client = new pg.Client({
      host,
      port,
      user,
      password,
      database: 'postgres',
      ssl: ssl ? { rejectUnauthorized: false } : false
    });

    const maxRetries = 5;
    let retryCount = 0;

    while (retryCount < maxRetries) {
      try {
        await client.connect();
        break;
      } catch (err) {
        if (err.code === 'EADDRINUSE') {
          retryCount++;
          logger.warn(`Port conflict (EADDRINUSE) connecting to database, retrying (${retryCount}/${maxRetries})...`);
          await new Promise(resolve => setTimeout(resolve, 1000));
        } else {
          throw err;
        }
      }
    }

    try {
      const checkRes = await client.query(
        `SELECT 1 FROM pg_database WHERE datname = $1`,
        [database]
      );

      if (checkRes.rowCount === 0) {
        logger.info(`Database '${database}' does not exist. Creating...`);
        // CREATE DATABASE cannot run inside a transaction block
        await client.query(`CREATE DATABASE "${database}"`);
        logger.info(`Database '${database}' created.`);
      } else {
        logger.info(`Database '${database}' already exists.`);
      }
    } catch (err) {
      logger.error('Error ensuring database exists:', err);
      throw err;
    } finally {
      await client.end();
    }
  }

  async ensureSchemaAndTable() {
    // dbManager connects to the target database
    const client = await dbManager.pool.connect();
    try {
      const sqlPathCandidates = [
        path.resolve(process.cwd(), 'scripts/init_db.sql'),
        path.resolve(__dirname, '../scripts/init_db.sql'),
        path.resolve(__dirname, '../../scripts/init_db.sql')
      ];
      const sqlPath = sqlPathCandidates.find((candidate) => fs.existsSync(candidate));
      if (!sqlPath) {
        throw new Error(`init_db.sql not found. Candidates: ${sqlPathCandidates.join(' | ')}`);
      }
      const sql = fs.readFileSync(sqlPath, 'utf8');

      logger.info('Executing init_db.sql...');
      await client.query(sql);
      logger.info('Schema and parent table initialized.');
    } catch (err) {
      logger.error('Error initializing schema and table:', err);
      throw err;
    } finally {
      client.release();
    }
  }
}

export default new DatabaseInitializer();
Deleted file (91 lines): the PartitionManager module

import { logger } from '../utils/logger.js';
import dbManager from './databaseManager.js';

const PARENT_TABLE = 'rcu_action.rcu_action_events';
const PARTITION_TABLESPACE = 'ts_hot';
const PARENT_INDEX_STATEMENTS = [
  'CREATE INDEX IF NOT EXISTS idx_rcu_action_hotel_id ON rcu_action.rcu_action_events (hotel_id);',
  'CREATE INDEX IF NOT EXISTS idx_rcu_action_room_id ON rcu_action.rcu_action_events (room_id);',
  'CREATE INDEX IF NOT EXISTS idx_rcu_action_device_id ON rcu_action.rcu_action_events (device_id);',
  'CREATE INDEX IF NOT EXISTS idx_rcu_action_direction ON rcu_action.rcu_action_events (direction);',
  'CREATE INDEX IF NOT EXISTS idx_rcu_action_cmd_word ON rcu_action.rcu_action_events (cmd_word);',
  'CREATE INDEX IF NOT EXISTS idx_rcu_action_action_type ON rcu_action.rcu_action_events (action_type);',
  'CREATE INDEX IF NOT EXISTS idx_rcu_action_query_main ON rcu_action.rcu_action_events (hotel_id, room_id, ts_ms DESC);'
];

class PartitionManager {
  async ensureParentIndexes(client) {
    for (const sql of PARENT_INDEX_STATEMENTS) {
      await client.query(sql);
    }
  }

  /**
   * Calculate the start and end timestamps (milliseconds) for a given date.
   * @param {Date} date - The date to calculate for.
   * @returns {Object} { startMs, endMs, partitionSuffix }
   */
  getPartitionInfo(date) {
    const yyyy = date.getFullYear();
    const mm = String(date.getMonth() + 1).padStart(2, '0');
    const dd = String(date.getDate()).padStart(2, '0');
    const partitionSuffix = `${yyyy}${mm}${dd}`;

    const start = new Date(date);
    start.setHours(0, 0, 0, 0);
    const startMs = start.getTime();

    const end = new Date(date);
    end.setDate(end.getDate() + 1);
    end.setHours(0, 0, 0, 0);
    const endMs = end.getTime();

    return { startMs, endMs, partitionSuffix };
  }

  /**
   * Ensure partitions exist for the next N days.
   * @param {number} daysAhead - Number of days to pre-create.
   */
  async ensurePartitions(daysAhead = 30) {
    const client = await dbManager.pool.connect();
    try {
      logger.info(`Starting partition check for the next ${daysAhead} days...`);
      await this.ensureParentIndexes(client);
      const now = new Date();

      for (let i = 0; i < daysAhead; i++) {
        const targetDate = new Date(now);
        targetDate.setDate(now.getDate() + i);

        const { startMs, endMs, partitionSuffix } = this.getPartitionInfo(targetDate);
        const partitionName = `rcu_action.rcu_action_events_${partitionSuffix}`;

        // Check if partition exists
        const checkSql = `
          SELECT to_regclass($1) as exists;
        `;
        const checkRes = await client.query(checkSql, [partitionName]);

        if (!checkRes.rows[0].exists) {
          logger.info(`Creating partition ${partitionName} for range [${startMs}, ${endMs})`);
          const createSql = `
            CREATE TABLE IF NOT EXISTS ${partitionName}
            PARTITION OF ${PARENT_TABLE}
            FOR VALUES FROM (${startMs}) TO (${endMs})
            TABLESPACE ${PARTITION_TABLESPACE};
          `;
          await client.query(createSql);
        }
      }
      logger.info('Partition check completed.');
    } catch (err) {
      logger.error('Error ensuring partitions:', err);
      throw err;
    } finally {
      client.release();
    }
  }
}

export default new PartitionManager();
Modified: `bls-rcu-action-backend/src/db/roomStatusManager.js`

@@ -3,11 +3,10 @@
  *
  * Manages an independent PostgreSQL connection pool for
  * the room_status.room_status_moment snapshot table.
- * Provides batch upsert with JSONB merge and auto-partition creation.
+ * Provides batch upsert with JSONB merge.
  */
 import pg from 'pg';
 import { randomUUID } from 'crypto';
-import { logger } from '../utils/logger.js';

 const { Pool } = pg;

@@ -28,8 +27,6 @@ export class RoomStatusManager {
     this.schema = dbConfig.schema;
     this.table = dbConfig.table;
     this.fullTableName = `${this.schema}.${this.table}`;
-    // Track which partitions we have already ensured
-    this.knownPartitions = new Set();
   }

   /**
@@ -41,15 +38,6 @@ export class RoomStatusManager {
    */
   async upsertBatch(rows) {
     if (!rows || rows.length === 0) return;

-    // Pre-ensure all needed partitions exist before attempting upsert
-    const newHotelIds = [...new Set(rows.map(r => r.hotel_id))]
-      .filter(id => !this.knownPartitions.has(id));
-
-    if (newHotelIds.length > 0) {
-      await this._ensurePartitionsBatch(newHotelIds);
-    }
-
     await this._doUpsert(rows);
   }

@@ -95,54 +83,6 @@ export class RoomStatusManager {

     await this.pool.query(sql, values);
   }

-  /**
-   * Check if an error is a missing partition error.
-   */
-  _isPartitionMissingError(error) {
-    const msg = error?.message || '';
-    return msg.includes('no partition') || msg.includes('routing') ||
-      (error?.code === '23514' && msg.includes('partition'));
-  }
-
-  /**
-   * Batch-create LIST partitions for multiple hotel_ids in a single connection.
-   * Uses CREATE TABLE IF NOT EXISTS (idempotent) — no check query needed.
-   */
-  async _ensurePartitionsBatch(hotelIds) {
-    const client = await this.pool.connect();
-    try {
-      for (const hotelId of hotelIds) {
-        const partitionName = `${this.schema}.${this.table}_h${hotelId}`;
-        try {
-          await client.query(
-            `CREATE TABLE IF NOT EXISTS ${partitionName} PARTITION OF ${this.fullTableName} FOR VALUES IN (${hotelId})`
-          );
-          this.knownPartitions.add(hotelId);
-        } catch (err) {
-          // Partition may already exist (race condition) — safe to ignore
-          if (!err.message?.includes('already exists')) {
-            logger.error('Error creating partition', { error: err?.message, hotelId });
-          }
-          this.knownPartitions.add(hotelId);
-        }
-      }
-      if (hotelIds.length > 0) {
-        logger.info(`Ensured ${hotelIds.length} room_status partitions`);
-      }
-    } finally {
-      client.release();
-    }
-  }
-
-  /**
-   * Ensure a LIST partition exists for the given hotel_id (single).
-   */
-  async ensurePartition(hotelId) {
-    if (this.knownPartitions.has(hotelId)) return;
-    await this._ensurePartitionsBatch([hotelId]);
-  }
-
   async testConnection() {
     try {
       await this.pool.query('SELECT 1');
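The removed `_ensurePartitionsBatch` interpolated `hotel_id` directly into the DDL string, since partition DDL cannot use bind parameters. The equivalent statement now lives in `SQL_Script/partition_room_status.sql`; a small sketch of the statement builder, with an integer guard that is my illustrative addition, not part of the original code:

```javascript
// Builds the LIST-partition DDL the removed code issued per hotel_id.
// The Number.isInteger guard is illustrative: because the value is
// interpolated rather than bound, it must be validated first.
function buildRoomStatusPartitionSql(schema, table, hotelId) {
  if (!Number.isInteger(hotelId)) {
    throw new Error(`hotel_id must be an integer, got: ${hotelId}`);
  }
  const fullTableName = `${schema}.${table}`;
  const partitionName = `${schema}.${table}_h${hotelId}`;
  return `CREATE TABLE IF NOT EXISTS ${partitionName} PARTITION OF ${fullTableName} FOR VALUES IN (${hotelId})`;
}

console.log(buildRoomStatusPartitionSql('room_status', 'room_status_moment', 1001));
// CREATE TABLE IF NOT EXISTS room_status.room_status_moment_h1001 PARTITION OF room_status.room_status_moment FOR VALUES IN (1001)
```

`CREATE TABLE IF NOT EXISTS` keeps the operation idempotent, which is why the original code could skip a separate existence check.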
Modified: `bls-rcu-action-backend/src/index.js`

@@ -1,8 +1,6 @@
 import cron from 'node-cron';
 import { config } from './config/config.js';
 import dbManager from './db/databaseManager.js';
-import dbInitializer from './db/initializer.js';
-import partitionManager from './db/partitionManager.js';
 import projectMetadata from './cache/projectMetadata.js';
 import { createKafkaConsumers } from './kafka/consumer.js';
 import { processKafkaMessage } from './processor/index.js';
@@ -17,52 +15,12 @@ import { logger } from './utils/logger.js';
 import { BatchProcessor } from './db/batchProcessor.js';

 const bootstrap = async () => {
-  // 0. Initialize Database (Create DB, Schema, Table, Partitions)
-  // Only execute initialization if enabled (default: true)
-  if (config.enableDatabaseInitialization) {
-    logger.info('Database initialization is enabled. Starting initialization...');
-    await dbInitializer.initialize();
-  } else {
-    logger.info('Database initialization is disabled. Skipping database initialization, schema creation, and partition setup.');
-  }
-
-  // 0.1 Initialize Project Metadata Cache
+  // 0. Initialize Project Metadata Cache
   await projectMetadata.init();

   // Metric Collector
   const metricCollector = new MetricCollector();

-  // 1. Setup Partition Maintenance Cron Job (Every day at 00:00)
-  // Only setup partition maintenance cron if database initialization is enabled
-  if (config.enableDatabaseInitialization) {
-    cron.schedule('0 0 * * *', async () => {
-      logger.info('Running scheduled partition maintenance...');
-      try {
-        await partitionManager.ensurePartitions(30);
-      } catch (err) {
-        logger.error('Scheduled partition maintenance failed', err);
-      }
-    });
-  } else {
-    logger.info('Partition maintenance cron job is disabled (database initialization is disabled).');
-  }
-
-  // 1.1 Setup Metric Reporting Cron Job (Every minute)
-  // Moved after redisIntegration initialization
-
-  // DatabaseManager is now a singleton exported instance, but let's keep consistency if possible
-  // In databaseManager.js it exports `dbManager` instance by default.
-  // The original code was `const dbManager = new DatabaseManager(config.db);` which implies it might have been a class export.
-  // Let's check `databaseManager.js` content.
-  // Wait, I imported `dbManager` from `./db/databaseManager.js`.
-  // If `databaseManager.js` exports an instance as default, I should use that.
-  // If it exports a class, I should instantiate it.
-
-  // Let's assume the previous code `new DatabaseManager` was correct if it was a class.
-  // BUT I used `dbManager.pool` in `partitionManager.js` assuming it's an instance.
-  // I need to verify `databaseManager.js`.
-
   const redisClient = await createRedisClient(config.redis);
   const redisIntegration = new RedisIntegration(
     redisClient,
@@ -80,7 +38,7 @@ const bootstrap = async () => {
   });
   logger.info('Room Status sync pipeline initialized');

-  // 1.1 Setup Metric Reporting Cron Job (Every minute)
+  // 1. Setup Metric Reporting Cron Job (Every minute)
   cron.schedule('* * * * *', async () => {
     const metrics = metricCollector.getAndReset();
     const report = `[Minute Metrics] Pulled: ${metrics.kafka_pulled}, Parse Error: ${metrics.parse_error}, Inserted: ${metrics.db_inserted}, Failed: ${metrics.db_failed}`;
@@ -0,0 +1,15 @@
# Proposal: Externalize DB Provisioning

## Why

- Reduce main-service complexity and the risk of running DDL at runtime.
- Avoid creating databases/partitions inside a high-concurrency consumer service.
- Make provisioning schedulable at the platform level (standalone jobs / external callers).

## What

- Remove initialization and partition-creation capabilities from the main service.
- Consolidate provisioning SQL and JS tooling into the root-level `SQL_Script` directory.
- Provide unified npm scripts as the entry point for external callers.

## Non-Goals

- No changes to Kafka parsing or the business write model.
- No backward-compatibility shim for the old toggle.
@@ -0,0 +1,64 @@
# Externalize Database Provisioning to SQL_Script

## 1. Background

The service process still carries database-initialization and partition-creation responsibilities. To reduce main-service complexity, avoid the risk of runtime DDL, and allow an external program to schedule provisioning centrally, the create-database/create-table/create-partition capability is moved entirely into the root-level `SQL_Script` directory.

## 2. Goals

1. After startup, the main service `bls-rcu-action-backend` only performs: consume Kafka -> parse -> write to the database.
2. Remove all runtime database/table/partition creation logic and the corresponding cron jobs.
3. Keep no legacy compatibility toggle (such as `ENABLE_DATABASE_INITIALIZATION`).
4. Provide reusable SQL plus a JS entry point under the root-level `SQL_Script` directory for other programs to call.
## 3. Scope of Changes

### 3.1 Business service (stripped)

- Deleted files:
  - `bls-rcu-action-backend/src/db/initializer.js`
  - `bls-rcu-action-backend/src/db/partitionManager.js`
- Modified files:
  - `bls-rcu-action-backend/src/index.js`
    - Remove initialization and the scheduled partition-creation calls
  - `bls-rcu-action-backend/src/config/config.js`
    - Remove `enableDatabaseInitialization`
  - `bls-rcu-action-backend/src/db/roomStatusManager.js`
    - Remove auto-partition logic (keep upsert only)
  - `bls-rcu-action-backend/.env`
  - `bls-rcu-action-backend/.env.example`
    - Remove `ENABLE_DATABASE_INITIALIZATION`
- Deleted legacy SQL:
  - `bls-rcu-action-backend/scripts/init_db.sql`

### 3.2 External scripts (added)

New root-level `SQL_Script/` directory:

- `init_rcu_action.sql`
- `init_room_status.sql`
- `partition_rcu_action.sql`
- `partition_room_status.sql`
- `db_manager.js` (CLI + import)
- `README.md`
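`db_manager.js` itself is not shown in this chunk. One plausible shape for its CLI dispatch layer, assuming subcommand names that mirror the npm scripts (the command strings and returned action names here are assumptions; only the exported functions `initAll`, `ensureRcuPartitions`, and `ensureRoomStatusPartition` are confirmed by the README):

```javascript
// Hypothetical sketch of the db_manager.js argument parsing. The real
// file may name its commands differently; this only illustrates the
// CLI-plus-import dual entry point described above.
function parseCliArgs(argv) {
  const [command, ...rest] = argv;
  switch (command) {
    case 'init-all':
      return { action: 'initAll' };
    case 'partition-rcu-action':
      // A default horizon of 30 days would match the documented npm script behavior.
      return { action: 'ensureRcuPartitions', daysAhead: Number(rest[0] ?? 30) };
    case 'partition-room-status': {
      const hotelId = Number(rest[0]);
      if (!Number.isInteger(hotelId)) throw new Error('hotelId required');
      return { action: 'ensureRoomStatusPartition', hotelId };
    }
    default:
      throw new Error(`Unknown command: ${command}`);
  }
}

console.log(parseCliArgs(['partition-room-status', '1001']));
// { action: 'ensureRoomStatusPartition', hotelId: 1001 }
```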
### 3.3 npm scripts (entry points)

Added to `bls-rcu-action-backend/package.json`:

- `db:init:all`
- `db:init:rcu-action`
- `db:init:room-status`
- `db:partition:rcu-action`
- `db:partition:room-status`
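A plausible shape for these `package.json` entries; the subcommand names passed to `db_manager.js` are assumptions (only the script names themselves are confirmed by this change):

```json
{
  "scripts": {
    "db:init:all": "node ../SQL_Script/db_manager.js init-all",
    "db:init:rcu-action": "node ../SQL_Script/db_manager.js init-rcu-action",
    "db:init:room-status": "node ../SQL_Script/db_manager.js init-room-status",
    "db:partition:rcu-action": "node ../SQL_Script/db_manager.js partition-rcu-action",
    "db:partition:room-status": "node ../SQL_Script/db_manager.js partition-room-status"
  }
}
```

With this layout, the documented `npm run db:partition:room-status -- 1001` form works because npm forwards everything after `--` as extra arguments to the script.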
## 4. Design Constraints

1. The main-service code must not contain `CREATE TABLE` / `CREATE SCHEMA` / `CREATE INDEX` or partition-creation statements.
2. The main service must not contain any call path into an initializer or partition manager.
3. Provisioning capability lives only in the root-level `SQL_Script` directory.
## 5. Acceptance Criteria

1. `npm run test` passes in full.
2. `npm run build` passes, and `dist/index.js` contains none of the following keywords:
   - `dbInitializer`
   - `partitionManager`
   - `ENABLE_DATABASE_INITIALIZATION`
   - `ensurePartitions`
3. `npm run db:init:all` runs the external initialization flow.
4. `npm run db:partition:rcu-action` and `npm run db:partition:room-status -- <hotelId>` run the external partitioning flows.
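Acceptance item 2 can be checked mechanically. A small sketch; the scanning helper is illustrative and not part of the repository:

```javascript
// Returns which forbidden keywords appear in a build artifact's source text.
const FORBIDDEN = [
  'dbInitializer',
  'partitionManager',
  'ENABLE_DATABASE_INITIALIZATION',
  'ensurePartitions'
];

function findForbidden(source, keywords = FORBIDDEN) {
  return keywords.filter(kw => source.includes(kw));
}

// Usage against the real artifact (path taken from the acceptance criteria):
// const source = require('fs').readFileSync('dist/index.js', 'utf8');
// if (findForbidden(source).length > 0) process.exit(1);

console.log(findForbidden('const x = ensurePartitions(30);'));
// [ 'ensurePartitions' ]
```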
## 6. Migration Notes

- Deployments that relied on the service creating the database at startup must now run the corresponding `SQL_Script` commands first, then start the main service.
- This is a deliberately breaking upgrade: the old toggle and code paths are removed to avoid maintaining two parallel paths.
@@ -0,0 +1,37 @@
# 2026-03-04 Summary: Externalizing DB Provisioning

## Overview

The provisioning capability (initialization + partition creation) has been fully stripped out of the main service, moved to the root-level `SQL_Script` directory, and exposed through JS/SQL entry points that other programs can call.

## Results

### Deleted (inside the main service)

- `src/db/initializer.js`
- `src/db/partitionManager.js`
- Runtime initialization and scheduled partition-maintenance logic
- Config option: `ENABLE_DATABASE_INITIALIZATION`
- Legacy SQL: `bls-rcu-action-backend/scripts/init_db.sql`

### Added (repository root)

- `SQL_Script/init_rcu_action.sql`
- `SQL_Script/init_room_status.sql`
- `SQL_Script/partition_rcu_action.sql`
- `SQL_Script/partition_room_status.sql`
- `SQL_Script/db_manager.js`
- `SQL_Script/README.md`

### Added npm entry points

- `db:init:all`
- `db:init:rcu-action`
- `db:init:room-status`
- `db:partition:rcu-action`
- `db:partition:room-status`

## Verification

- `npm run test`: passed (45/45)
- `npm run build`: passed
- The build artifact no longer contains initialization/partition-management logic

## Impact

- The main service has a single responsibility: Kafka consumption and database writes.
- DDL is now externally controlled, suitable for centralized execution by a scheduler or a dedicated service.