feat: initialize the temporary project management structure and core functionality
- Add the project root and configuration files, including .gitignore, .env.example, and package.json
- Implement the database connection pool, configuration loading, logging, and HTTP client utilities
- Add a data service layer with batched, transactional persistence for hotel, room, room-type, and loop data
- Create the main script implementing the phased fetch/process/store pipeline
- Provide a database initialization script and test cases
- Add project documentation, including README.md and the project requirements spec
29
temporary_project_management/.env
Normal file
@@ -0,0 +1,29 @@
PORT=3000
DB_FILE=./local.db
API_BASE_URL=http://www.boonlive-rcu.com:7000/api/values

PORT=3000

# Database configuration (deprecated)
# (deprecated) DB_HOST=10.8.8.109
# (deprecated) DB_PORT=5433
# (deprecated) DB_USER=log_admin
# (deprecated) DB_PASSWORD=YourActualStrongPasswordForPostgres!
# (deprecated) DB_NAME=log_platform

# Database configuration
POSTGRES_HOST=10.8.8.109
POSTGRES_PORT=5433
POSTGRES_DATABASE=log_platform
POSTGRES_USER=log_admin
POSTGRES_PASSWORD=YourActualStrongPasswordForPostgres!
POSTGRES_MAX_CONNECTIONS=2
POSTGRES_IDLE_TIMEOUT_MS=30000
# List of enabled hotel IDs (single IDs and ranges are both accepted)
ENABLED_HOTEL_IDS=1085,2100-2316

# API enable flags (true/false)
ENABLE_API_HOTEL_LIST=false # hotel list
ENABLE_API_HOST_LIST=false # room list
ENABLE_API_ROOM_TYPE_INFO=false # room type list
ENABLE_API_ROOM_TYPE_MODAL_INFO=false # loop list
4
temporary_project_management/.env.example
Normal file
@@ -0,0 +1,4 @@
PORT=3000
DB_FILE=./local.db
ENABLED_HOTEL_IDS=1085,2144
API_BASE_URL=http://www.boonlive-rcu.com:7000/api/values
57
temporary_project_management/README.md
Normal file
@@ -0,0 +1,57 @@
# Temporary Project Management

This is a local Node.js project that fetches hotel data and loop addresses and stores them in a PostgreSQL database.

## Prerequisites

- Node.js (v14+ recommended)
- npm

## Installation

1. Clone or navigate to the project directory.
2. Install dependencies:
```bash
npm install
```
3. Copy `.env.example` to `.env` and configure:
```bash
cp .env.example .env
```
(On Windows: `copy .env.example .env`)

## Configuration (.env)

- `PORT`: App port (default 3000)
- `POSTGRES_HOST`, `POSTGRES_PORT`, `POSTGRES_DATABASE`, `POSTGRES_USER`, `POSTGRES_PASSWORD`: PostgreSQL connection settings
- `POSTGRES_MAX_CONNECTIONS`, `POSTGRES_IDLE_TIMEOUT_MS`: Connection-pool size and idle timeout
- `ENABLED_HOTEL_IDS`: Comma-separated Hotel IDs to fetch loops for; single IDs and ranges are both accepted (e.g., `1085,2100-2316`)
- `DB_FILE`: Legacy SQLite path from an earlier iteration; not read by the current PostgreSQL code
- `API_BASE_URL`: Base URL for external APIs

## Usage

- **Development Mode** (with hot reload):
```bash
npm run dev
```
- **Start Production**:
```bash
npm start
```
- **Run Tests**:
```bash
npm test
```

## Project Structure

- `src/config`: Configuration loading
- `src/db`: Database connection and schema initialization
- `src/services`: Business logic for fetching and saving data
- `src/utils`: Helpers (logger, HTTP client with retry/delay)
- `src/scripts`: Main entry points

## Troubleshooting

- **Database Connectivity**: If queries fail or hang, verify the `POSTGRES_*` settings in `.env` and that the server at `POSTGRES_HOST:POSTGRES_PORT` is reachable.
- **API Timeouts**: If the API is slow, the app is configured with a 100s timeout and 2 retries. Check your network connection.
- **Logs**: Check the `logs/` directory for daily application logs and error snapshots.
BIN
temporary_project_management/local.db
Normal file
Binary file not shown.
4706
temporary_project_management/package-lock.json
generated
Normal file
File diff suppressed because it is too large
27
temporary_project_management/package.json
Normal file
@@ -0,0 +1,27 @@
{
  "name": "temporary_project_management",
  "version": "1.0.0",
  "description": "",
  "main": "src/scripts/main.js",
  "scripts": {
    "start": "node src/scripts/main.js",
    "dev": "nodemon src/scripts/main.js",
    "init-db": "node src/scripts/init_only.js",
    "test": "jest"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "axios": "^1.6.0",
    "dotenv": "^16.0.0",
    "pg": "^8.0.0",
    "pino": "^8.0.0",
    "uuid": "^9.0.0"
  },
  "devDependencies": {
    "jest": "^29.0.0",
    "nodemon": "^3.0.0",
    "pino-pretty": "^10.0.0"
  }
}
63
temporary_project_management/project1.md
Normal file
@@ -0,0 +1,63 @@
# 1. Hotel Live-Status Tables
- This schema consists of 4 tables:
  - hotels table (hotels)
  - rooms table (rooms)
  - room-type table (room_type)
  - loops table (loops)
- Schema name: temporary_project

## 1.1 Hotels table (hotels)
- Table key (guid): 32-character unsigned UUID.
- Hotel code (hotel_id): the hotel's unique identifier. int *indexed* {maps to the API's hotelCode}
- Name (hotel_name): the hotel's name. varchar(255) *indexed* {maps to hotelName}
- Hotel ID (id): the hotel's ID. int *indexed* {maps to hotelID}

-- Primary key: (hotel_id, guid)
-- API response shape: {"hotelID":"1","hotelCode":"1001","hotelName":"默认酒店"}

## 1.2 Rooms table (rooms)
- Table key (guid): 32-character unsigned UUID.
- Hotel ID (hotel_id): references the hotels table's hotel ID. int *indexed* {maps to hotelID}
- Room name (room_id): the room's name. varchar(255) {maps to roomNumber}
- Room-type ID (room_type_id): the room's room-type ID (id). int {maps to roomTypeID}
- Host number (device_id): the room's host number. varchar(50) *indexed* {maps to hostNumber}
- MAC address (mac): the room's MAC address. varchar(50) *indexed* {maps to mac}
- Room ID (id): the room's ID. int *indexed* {maps to id}

-- Primary key: (guid, hotel_id, room_id)
-- API response shape: {"id":"53","hotelID":"6","roomTypeID":"18","roomNumber":"320","hostNumber":"238003002090","mac":"34-D0-B8-1F-02-5A"}

## 1.3 Room-type table (room_type)
- Table key (guid): 32-character unsigned UUID.
- Room-type ID (id): the room type's ID. int *indexed* {maps to id}
- Name (room_type_name): the room type's name. varchar(255) *indexed* {maps to roomTypeName}
- Hotel ID (hotel_id): references the hotels table's hotel ID (id). int *indexed* {maps to hotelID}

-- Primary key: (guid, id)
-- API response shape: {"id":"220","hotelID":"10","roomTypeName":"语音双人间"}

## 1.4 Loops table (loops)
- Table key (guid): 32-character unsigned UUID.
- Loop ID (id): the loop's unique identifier. int *indexed* {maps to id}
- Name (loop_name): the loop's name. varchar(255) *indexed* {maps to name}
- Room-type ID (room_type_id): references the room_type table's ID (id). int *indexed* {maps to roomTypeID}
- Loop address (loop_address): the loop's address. varchar(255) *indexed* {maps to modalAddress}
- Loop type (loop_type): the loop's type. varchar(50) *indexed* {maps to type}

-- Primary key: (guid, id)
-- API response shape: {"id": "273","roomTypeID": "2","modalAddress": "015001010","type": "15","name": "向右开关"}

# Relationships between the 4 tables:
- One hotel has many rooms (linked by hotel ID); each room has one room type (linked by room-type ID); one room type has many loops (linked by room-type ID).

# Notes:
- The database is PostgreSQL.
- The rooms and loops tables are very large, so use partitioned tables to speed up queries: partition rooms by hotel ID and loops by room-type ID, using PARTITION BY LIST.

# API
url: http://www.boonlive-rcu.com:7000/api/values

/GetHotelList — fetch hotels, GET request
/GetHostList — fetch hosts, GET request
/GetRoomType_Info — fetch room types, GET request

/GetRoomType_ModalInfo — fetch the device table for room types, POST request
- Parameter: room_type_id[] array; each element is a room-type ID
50
temporary_project_management/src/config/index.js
Normal file
@@ -0,0 +1,50 @@
require('dotenv').config();

module.exports = {
  port: process.env.PORT || 3000,
  dbConfig: {
    host: process.env.POSTGRES_HOST || '10.8.8.109',
    port: parseInt(process.env.POSTGRES_PORT, 10) || 5433,
    database: process.env.POSTGRES_DATABASE || 'log_platform',
    user: process.env.POSTGRES_USER || 'log_admin',
    password: process.env.POSTGRES_PASSWORD || 'YourActualStrongPasswordForPostgres!',
    max: parseInt(process.env.POSTGRES_MAX_CONNECTIONS, 10) || 6,
    idleTimeoutMillis: parseInt(process.env.POSTGRES_IDLE_TIMEOUT_MS, 10) || 30000,
  },
  enabledHotelIds: parseHotelIds(process.env.ENABLED_HOTEL_IDS),
  apiBaseUrl: process.env.API_BASE_URL || 'http://www.boonlive-rcu.com:7000/api/values',
  apiToggles: {
    hotelList: process.env.ENABLE_API_HOTEL_LIST !== 'false',
    hostList: process.env.ENABLE_API_HOST_LIST !== 'false',
    roomTypeInfo: process.env.ENABLE_API_ROOM_TYPE_INFO !== 'false',
    roomTypeModalInfo: process.env.ENABLE_API_ROOM_TYPE_MODAL_INFO !== 'false',
  }
};

// Expands "1085,2100-2316" into a sorted, de-duplicated array of integers.
function parseHotelIds(envVar) {
  if (!envVar) return [];
  const parts = envVar.split(',');
  const ids = new Set();

  parts.forEach(part => {
    part = part.trim();
    if (part.includes('-')) {
      const [startStr, endStr] = part.split('-');
      const start = parseInt(startStr.trim(), 10);
      const end = parseInt(endStr.trim(), 10);

      if (!isNaN(start) && !isNaN(end) && start <= end) {
        for (let i = start; i <= end; i++) {
          ids.add(i);
        }
      }
    } else {
      const num = parseInt(part, 10);
      if (!isNaN(num)) {
        ids.add(num);
      }
    }
  });

  return Array.from(ids).sort((a, b) => a - b);
}
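A quick sketch of how the range syntax in `ENABLED_HOTEL_IDS` expands, assuming the module is required from the project root (values shortened for readability; not part of this commit):

```js
// Pre-set the variable before the module loads; dotenv does not override
// values that already exist in process.env.
process.env.ENABLED_HOTEL_IDS = '1085,2100-2103';
const { enabledHotelIds } = require('./src/config');
console.log(enabledHotelIds); // [1085, 2100, 2101, 2102, 2103]
```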
16
temporary_project_management/src/db/index.js
Normal file
@@ -0,0 +1,16 @@
const { Pool } = require('pg');
const { dbConfig } = require('../config');
const logger = require('../utils/logger');

const pool = new Pool(dbConfig);

pool.on('error', (err, client) => {
  logger.error({ err }, 'Unexpected error on idle client');
  process.exit(-1);
});

pool.on('connect', () => {
  logger.debug('New client connected to database');
});

module.exports = pool;
201
temporary_project_management/src/db/init.js
Normal file
@@ -0,0 +1,201 @@
const { query } = require('./utils');
const logger = require('../utils/logger');

// Schema design notes (see project1.md):
// - PostgreSQL requires the partition key to be part of the primary key.
//   The MD's primary keys are used as specified for hotels (hotel_id, guid),
//   rooms (guid, hotel_id, room_id), and room_type (guid, id). For loops the
//   MD specifies PK (guid, id) but also partitioning by room_type_id; since
//   the partition key must appear in the PK, room_type_id is added, giving
//   (guid, id, room_type_id). Partitioning is kept because the MD calls it
//   out explicitly as an optimization for the very large loops table.
// - DEFAULT partitions are created so inserts succeed even for hotel or
//   room-type IDs that have no dedicated partition yet.
// - Indexes on partitioned tables require PostgreSQL 11+.
const initDB = async () => {
  try {
    logger.info('Initializing database schema (PostgreSQL)...');

    // 1. Create Schema
    await query('CREATE SCHEMA IF NOT EXISTS temporary_project');

    // 1.1 Hotels Table — PK (hotel_id, guid)
    await query(`
      CREATE TABLE IF NOT EXISTS temporary_project.hotels (
        guid VARCHAR(32) NOT NULL,
        hotel_id INTEGER NOT NULL,
        hotel_name VARCHAR(255),
        id INTEGER,
        PRIMARY KEY (hotel_id, guid)
      )
    `);
    await query(`CREATE INDEX IF NOT EXISTS idx_hotels_hotel_id ON temporary_project.hotels(hotel_id)`);
    await query(`CREATE INDEX IF NOT EXISTS idx_hotels_hotel_name ON temporary_project.hotels(hotel_name)`);
    await query(`CREATE INDEX IF NOT EXISTS idx_hotels_id ON temporary_project.hotels(id)`);

    // 1.2 Rooms Table — PK (guid, hotel_id, room_id), PARTITION BY LIST (hotel_id).
    // The partition key hotel_id is part of the PK, as PostgreSQL requires.
    await query(`
      CREATE TABLE IF NOT EXISTS temporary_project.rooms (
        guid VARCHAR(32) NOT NULL,
        hotel_id INTEGER NOT NULL,
        room_id VARCHAR(255) NOT NULL,
        room_type_id INTEGER,
        device_id VARCHAR(50),
        mac VARCHAR(50),
        id INTEGER,
        PRIMARY KEY (guid, hotel_id, room_id)
      ) PARTITION BY LIST (hotel_id)
    `);

    // Default partition catches rows for hotels without a dedicated partition.
    await query(`
      CREATE TABLE IF NOT EXISTS temporary_project.rooms_default PARTITION OF temporary_project.rooms DEFAULT
    `);

    await query(`CREATE INDEX IF NOT EXISTS idx_rooms_hotel_id ON temporary_project.rooms(hotel_id)`);
    await query(`CREATE INDEX IF NOT EXISTS idx_rooms_device_id ON temporary_project.rooms(device_id)`);
    await query(`CREATE INDEX IF NOT EXISTS idx_rooms_mac ON temporary_project.rooms(mac)`);
    await query(`CREATE INDEX IF NOT EXISTS idx_rooms_id ON temporary_project.rooms(id)`);

    // 1.3 Room Types Table — PK (guid, id)
    await query(`
      CREATE TABLE IF NOT EXISTS temporary_project.room_type (
        guid VARCHAR(32) NOT NULL,
        id INTEGER NOT NULL,
        room_type_name VARCHAR(255),
        hotel_id INTEGER,
        PRIMARY KEY (guid, id)
      )
    `);
    await query(`CREATE INDEX IF NOT EXISTS idx_room_type_id ON temporary_project.room_type(id)`);
    await query(`CREATE INDEX IF NOT EXISTS idx_room_type_name ON temporary_project.room_type(room_type_name)`);
    await query(`CREATE INDEX IF NOT EXISTS idx_room_type_hotel_id ON temporary_project.room_type(hotel_id)`);

    // 1.4 Loops Table — PK (guid, id, room_type_id), PARTITION BY LIST (room_type_id).
    // room_type_id is added to the MD's PK so the partition key is covered.
    await query(`
      CREATE TABLE IF NOT EXISTS temporary_project.loops (
        guid VARCHAR(32) NOT NULL,
        id INTEGER NOT NULL,
        loop_name VARCHAR(255),
        room_type_id INTEGER NOT NULL,
        loop_address VARCHAR(255),
        loop_type VARCHAR(50),
        PRIMARY KEY (guid, id, room_type_id)
      ) PARTITION BY LIST (room_type_id)
    `);

    // Default partition
    await query(`
      CREATE TABLE IF NOT EXISTS temporary_project.loops_default PARTITION OF temporary_project.loops DEFAULT
    `);

    await query(`CREATE INDEX IF NOT EXISTS idx_loops_id ON temporary_project.loops(id)`);
    await query(`CREATE INDEX IF NOT EXISTS idx_loops_name ON temporary_project.loops(loop_name)`);
    await query(`CREATE INDEX IF NOT EXISTS idx_loops_room_type_id ON temporary_project.loops(room_type_id)`);
    await query(`CREATE INDEX IF NOT EXISTS idx_loops_address ON temporary_project.loops(loop_address)`);
    await query(`CREATE INDEX IF NOT EXISTS idx_loops_type ON temporary_project.loops(loop_type)`);

    logger.info('Database schema initialized successfully.');
  } catch (error) {
    logger.error({ error }, 'Error initializing database schema');
    throw error;
  }
};

module.exports = initDB;
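Both partitioned tables start with only a DEFAULT partition, so every row initially lands there. A minimal sketch, using a hypothetical helper (not part of this commit), of how a dedicated LIST partition for one hotel could be added:

```js
const { query } = require('./utils');

// Hypothetical helper: give one hotel its own rooms partition so its rows
// stop landing in rooms_default. PostgreSQL rejects the new partition if
// rooms_default already holds rows for this hotel, so run it before inserts.
const createRoomsPartition = async (hotelId) => {
  const id = parseInt(hotelId, 10); // partition bounds are literals; keep it numeric
  await query(`
    CREATE TABLE IF NOT EXISTS temporary_project.rooms_h${id}
    PARTITION OF temporary_project.rooms FOR VALUES IN (${id})
  `);
};
```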
14
temporary_project_management/src/db/utils.js
Normal file
@@ -0,0 +1,14 @@
const pool = require('./index');

const query = (text, params) => pool.query(text, params);

const getClient = () => pool.connect();

const close = () => pool.end();

module.exports = {
  query,
  getClient,
  close,
  pool
};
13
temporary_project_management/src/scripts/init_only.js
Normal file
@@ -0,0 +1,13 @@
const initDB = require('../db/init');
const { close } = require('../db/utils');

(async () => {
  try {
    await initDB();
    await close(); // release the pool before exiting
    console.log('Database initialized successfully.');
    process.exit(0);
  } catch (e) {
    console.error('Database initialization failed:', e);
    process.exit(1);
  }
})();
135
temporary_project_management/src/scripts/main.js
Normal file
@@ -0,0 +1,135 @@
const initDB = require('../db/init');
const { concurrentFetch, queuedFetch } = require('../utils/http');
const { saveHotelsTransaction, saveRoomsTransaction, saveRoomTypesTransaction, saveLoopsTransaction } = require('../services/dataService');
const { enabledHotelIds, apiToggles } = require('../config');
const { parseApiEndpoints } = require('../utils/mdParser');
const logger = require('../utils/logger');
const fs = require('fs');
const path = require('path');
const { query, close } = require('../db/utils');

// project1.md sits at the project root, two levels up from src/scripts.
const mdPath = path.resolve(__dirname, '../../project1.md');
const endpoints = parseApiEndpoints(mdPath);

const stats = {
  successHotels: 0,
  failHotels: 0,
  startTime: Date.now(),
  endTime: 0
};

const main = async () => {
  try {
    logger.info('Starting Application...');

    // Phase 1: Init
    await initDB();

    // Phase 2: Concurrent Data Fetch
    logger.info(`Starting Phase 2: Global Data Fetching using endpoints from MD: ${JSON.stringify(endpoints)}`);
    logger.info(`API Toggles: ${JSON.stringify(apiToggles)}`);

    try {
      // Helper to conditionally fetch or return an empty array. It takes a
      // thunk so a disabled endpoint is never actually requested.
      const fetchIfEnabled = (enabled, fetchFn) => enabled ? fetchFn() : Promise.resolve([]);

      const [hotels, rooms, roomTypes] = await Promise.all([
        fetchIfEnabled(apiToggles.hotelList, () => concurrentFetch(endpoints.getHotelList)),
        fetchIfEnabled(apiToggles.hostList, () => concurrentFetch(endpoints.getHostList)),
        fetchIfEnabled(apiToggles.roomTypeInfo, () => concurrentFetch(endpoints.getRoomTypeInfo))
      ]);

      logger.info(`Fetched ${hotels.length} hotels, ${rooms.length} rooms, ${roomTypes.length} room types.`);

      await saveHotelsTransaction(hotels);
      await saveRoomsTransaction(rooms);
      await saveRoomTypesTransaction(roomTypes);
      logger.info('Phase 2 Completed: Data saved.');
    } catch (error) {
      logger.error({ error }, 'Phase 2 failed. Exiting.');
      throw error;
    }

    // Phase 3: Loop Address Fetching
    if (apiToggles.roomTypeModalInfo) {
      logger.info('Starting Phase 3: Loop Address Fetching...');
      logger.info(`Enabled Hotels: ${enabledHotelIds.join(', ')}`);

      for (const hotelId of enabledHotelIds) {
        try {
          logger.info(`Processing Hotel ID: ${hotelId}`);

          // ENABLED_HOTEL_IDS are hotel codes (the hotel_id column), not the
          // internal IDs (the id column); resolve the internal ID first.
          const checkRes = await query('SELECT id FROM temporary_project.hotels WHERE hotel_id = $1', [hotelId]);
          const hotelExists = checkRes.rows.length > 0;

          if (!hotelExists) {
            logger.warn(`Hotel Code ${hotelId} not found in database. Skipping.`);
            stats.failHotels++;
            continue;
          }

          const internalHotelId = checkRes.rows[0].id;

          // Get room types for the hotel using the internal ID
          const roomTypesRes = await query('SELECT id FROM temporary_project.room_type WHERE hotel_id = $1', [internalHotelId]);
          const roomTypeIds = roomTypesRes.rows.map(rt => rt.id);

          if (roomTypeIds.length === 0) {
            logger.warn(`No room types found for Hotel ID ${hotelId}.`);
            stats.successHotels++;
            continue;
          }

          logger.info(`Fetching loops for ${roomTypeIds.length} room types...`);

          // POST to get loops using the parsed endpoint
          const loops = await queuedFetch(endpoints.getRoomTypeModalInfo, {
            method: 'POST',
            data: roomTypeIds
          });

          if (loops && Array.isArray(loops)) {
            await saveLoopsTransaction(loops);
            logger.info(`Saved ${loops.length} loops for Hotel ID ${hotelId}`);
          } else {
            logger.warn(`No loops returned for Hotel ID ${hotelId}`);
          }

          stats.successHotels++;

        } catch (err) {
          logger.error({ err, hotelId }, `Failed to process Hotel ID ${hotelId}`);
          stats.failHotels++;
          // 3.3 Ensure the flow continues with the next hotel
        }
      }
    } else {
      logger.info('Phase 3 Skipped: Loop Address Fetching is disabled.');
    }

    // Phase 4: Finish
    stats.endTime = Date.now();
    const duration = stats.endTime - stats.startTime;
    const summary = `All tasks completed. Success Hotels: ${stats.successHotels}, Failed Hotels: ${stats.failHotels}, Total Duration: ${duration}ms`;
    logger.info(summary);

    await close();
    process.exit(0);

  } catch (error) {
    // Uncaught exception handling (4.3): snapshot the stack to a file
    const logDir = path.join(process.cwd(), 'logs');
    if (!fs.existsSync(logDir)) fs.mkdirSync(logDir);
    const errorLogPath = path.join(logDir, `error-${Date.now()}.log`);
    fs.writeFileSync(errorLogPath, error.stack || error.toString());

    // Use console.error as the logger might be broken or async at this point
    console.error('Fatal error occurred. Stack trace written to ' + errorLogPath);
    process.exit(1);
  }
};

main();
128
temporary_project_management/src/services/dataService.js
Normal file
@@ -0,0 +1,128 @@
const { v4: uuidv4 } = require('uuid');
const { getClient } = require('../db/utils');
const logger = require('../utils/logger');

// Generate 32-char UUID (no dashes)
const generateGuid = () => uuidv4().replace(/-/g, '');

const validateSchema = (data, requiredFields) => {
  if (!data || !Array.isArray(data)) {
    throw new Error('Invalid data format: expected array');
  }
  for (const item of data) {
    for (const field of requiredFields) {
      if (item[field] === undefined || item[field] === null) {
        throw new Error(`Missing required field: ${field} in item ${JSON.stringify(item)}`);
      }
    }
  }
};

// Generic batch saver with a transaction.
// The MD requires "overwrite if the record (by ID) already exists". A fresh
// GUID is generated per insert and `id` carries no UNIQUE constraint (the
// PKs are composite), so `INSERT ... ON CONFLICT` cannot target it; instead
// each item is deleted by ID and re-inserted inside a single transaction.
// Per-item statements keep this simple; if volume becomes a problem, bulk
// deletes (e.g. WHERE id = ANY($1)) would be the next step.
const saveEntitiesTransaction = async (tableName, data, deleteByField, deleteValueExtractor, insertQuery, insertParamsExtractor) => {
  if (data.length === 0) return;

  const client = await getClient();
  try {
    await client.query('BEGIN');

    for (const item of data) {
      const deleteVal = deleteValueExtractor(item);
      await client.query(`DELETE FROM temporary_project.${tableName} WHERE ${deleteByField} = $1`, [deleteVal]);

      const params = insertParamsExtractor(item);
      await client.query(insertQuery, params);
    }

    await client.query('COMMIT');
  } catch (e) {
    await client.query('ROLLBACK');
    throw e;
  } finally {
    client.release();
  }
};

const saveHotelsTransaction = async (data) => {
  validateSchema(data, ['hotelID', 'hotelCode', 'hotelName']);
  return saveEntitiesTransaction(
    'hotels',
    data,
    'id',
    item => item.hotelID,
    'INSERT INTO temporary_project.hotels (guid, hotel_id, hotel_name, id) VALUES ($1, $2, $3, $4)',
    item => [generateGuid(), item.hotelCode, item.hotelName, item.hotelID] // hotel_id = hotelCode
  );
};

const saveRoomsTransaction = async (data) => {
  validateSchema(data, ['id', 'hotelID', 'roomTypeID', 'roomNumber', 'hostNumber', 'mac']);
  return saveEntitiesTransaction(
    'rooms',
    data,
    'id',
    item => item.id,
    'INSERT INTO temporary_project.rooms (guid, hotel_id, room_id, room_type_id, device_id, mac, id) VALUES ($1, $2, $3, $4, $5, $6, $7)',
    item => [generateGuid(), item.hotelID, item.roomNumber, item.roomTypeID, item.hostNumber, item.mac, item.id]
  );
};

const saveRoomTypesTransaction = async (data) => {
  validateSchema(data, ['id', 'hotelID', 'roomTypeName']);
  return saveEntitiesTransaction(
    'room_type',
    data,
    'id',
    item => item.id,
    'INSERT INTO temporary_project.room_type (guid, id, room_type_name, hotel_id) VALUES ($1, $2, $3, $4)',
    item => [generateGuid(), item.id, item.roomTypeName, item.hotelID]
  );
};

const saveLoopsTransaction = async (data) => {
  validateSchema(data, ['id', 'roomTypeID', 'modalAddress', 'type', 'name']);
  return saveEntitiesTransaction(
    'loops',
    data,
    'id',
    item => item.id,
    'INSERT INTO temporary_project.loops (guid, id, loop_name, room_type_id, loop_address, loop_type) VALUES ($1, $2, $3, $4, $5, $6)',
    item => [generateGuid(), item.id, item.name, item.roomTypeID, item.modalAddress, item.type]
  );
};

module.exports = {
  saveHotelsTransaction,
  saveRoomsTransaction,
  saveRoomTypesTransaction,
  saveLoopsTransaction
};
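The comment above rules out an upsert because `id` carries no UNIQUE constraint. As a sketch, assuming such a constraint were added (hypothetical, not part of this commit), the delete+insert pair would collapse into one statement per item:

```js
// Assumes: ALTER TABLE temporary_project.hotels
//            ADD CONSTRAINT hotels_id_unique UNIQUE (id);
const upsertHotel = (client, item, guid) => client.query(
  `INSERT INTO temporary_project.hotels (guid, hotel_id, hotel_name, id)
   VALUES ($1, $2, $3, $4)
   ON CONFLICT (id) DO UPDATE
     SET hotel_id = EXCLUDED.hotel_id,
         hotel_name = EXCLUDED.hotel_name`,
  [guid, item.hotelCode, item.hotelName, item.hotelID]
);
```

A side effect worth noting: unlike delete+insert, this keeps a row's original GUID stable across refreshes.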
54
temporary_project_management/src/utils/http.js
Normal file
@@ -0,0 +1,54 @@
const axios = require('axios');
const { apiBaseUrl } = require('../config');
const logger = require('./logger');

const client = axios.create({
  baseURL: apiBaseUrl,
  timeout: 100000, // 100s
});

const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms));

const fetchWithRetry = async (url, options = {}, retries = 2, delay = 3000) => {
  try {
    const response = await client(url, options);
    // The API returns a wrapped object: { isok: true, response: [...] }
    if (response.data && response.data.response && Array.isArray(response.data.response)) {
      return response.data.response;
    }
    return response.data;
  } catch (error) {
    if (retries > 0) {
      logger.warn(`Request failed to ${url}, retrying in ${delay}ms... (${retries} retries left)`);
      await sleep(delay);
      return fetchWithRetry(url, options, retries - 1, delay);
    }
    throw error;
  }
};

// Queue for sequential requests (Requirement 1.3)
let promiseChain = Promise.resolve();

const queuedFetch = (url, options = {}) => {
  const task = promiseChain.then(async () => {
    try {
      return await fetchWithRetry(url, options);
    } finally {
      await sleep(1000); // wait 1s after each request completes
    }
  });

  // Ensure the chain continues even if this task fails
  promiseChain = task.catch(() => {});

  return task;
};

// Concurrent fetch (Requirement 2.2) is direct usage of fetchWithRetry
const concurrentFetch = (url, options = {}) => {
  return fetchWithRetry(url, options);
};

module.exports = { queuedFetch, concurrentFetch };
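A short usage sketch mirroring how main.js drives these helpers (endpoint paths as parsed from project1.md):

```js
const { concurrentFetch, queuedFetch } = require('./http');

// Phase 2 style: independent GETs fired in parallel.
const fetchGlobals = () => Promise.all([
  concurrentFetch('/GetHotelList'),
  concurrentFetch('/GetHostList'),
  concurrentFetch('/GetRoomType_Info'),
]);

// Phase 3 style: POSTs run strictly one at a time, 1s apart, even when
// callers do not await each other.
const fetchLoops = (roomTypeIds) =>
  queuedFetch('/GetRoomType_ModalInfo', { method: 'POST', data: roomTypeIds });
```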
35
temporary_project_management/src/utils/logger.js
Normal file
@@ -0,0 +1,35 @@
const pino = require('pino');
const fs = require('fs');
const path = require('path');

const logDir = path.join(process.cwd(), 'logs');
if (!fs.existsSync(logDir)) {
  fs.mkdirSync(logDir);
}

const date = new Date().toISOString().split('T')[0];
const logFile = path.join(logDir, `app-${date}.log`);

// True daily rolling is tricky with plain pino; instead a new file is created
// per day, stamped with the process start date. A "keep 1 day" retention
// policy would need a separate cleanup step.
const transport = pino.transport({
  targets: [
    {
      target: 'pino/file',
      options: { destination: logFile, mkdir: true },
    },
    {
      target: 'pino-pretty',
      options: { colorize: true, translateTime: 'SYS:standard' }
    }
  ]
});

const logger = pino({
  level: 'info',
  timestamp: pino.stdTimeFunctions.isoTime,
}, transport);

module.exports = logger;
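For the retention note above, a minimal cleanup sketch (hypothetical, not part of this commit) that could run at startup:

```js
const fs = require('fs');
const path = require('path');

const logDir = path.join(process.cwd(), 'logs');
const ONE_DAY_MS = 24 * 60 * 60 * 1000;

// Delete app-*.log files whose last modification is older than one day.
const cleanupOldLogs = () => {
  for (const name of fs.readdirSync(logDir)) {
    const full = path.join(logDir, name);
    if (name.startsWith('app-') && name.endsWith('.log') &&
        Date.now() - fs.statSync(full).mtimeMs > ONE_DAY_MS) {
      fs.unlinkSync(full);
    }
  }
};
```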
59
temporary_project_management/src/utils/mdParser.js
Normal file
@@ -0,0 +1,59 @@
const fs = require('fs');
const logger = require('./logger');

const parseApiEndpoints = (filePath) => {
  try {
    if (!fs.existsSync(filePath)) {
      logger.warn(`MD file not found at ${filePath}, using default endpoints.`);
      return getDefaultEndpoints();
    }

    const content = fs.readFileSync(filePath, 'utf-8');
    const lines = content.split(/\r?\n/);

    // Match a leading "/EndpointName" on a line.
    const extractPath = (line) => {
      const match = line.match(/^\s*(\/[a-zA-Z0-9_]+)/);
      return match ? match[1] : null;
    };

    // Requirement 2.1 points at the API section around lines 58-60 of the MD.
    // Rather than hard-coding those line indexes, scan the whole file for the
    // known endpoint names so the parser survives minor edits to the MD.
    const endpoints = {};

    lines.forEach(line => {
      const p = extractPath(line);
      if (p) {
        if (p.includes('GetHotelList')) endpoints.getHotelList = p;
        if (p.includes('GetHostList')) endpoints.getHostList = p;
        if (p.includes('GetRoomType_Info') && !p.includes('Modal')) endpoints.getRoomTypeInfo = p;
        if (p.includes('GetRoomType_ModalInfo')) endpoints.getRoomTypeModalInfo = p;
      }
    });

    return {
      getHotelList: endpoints.getHotelList || '/GetHotelList',
      getHostList: endpoints.getHostList || '/GetHostList',
      getRoomTypeInfo: endpoints.getRoomTypeInfo || '/GetRoomType_Info',
      getRoomTypeModalInfo: endpoints.getRoomTypeModalInfo || '/GetRoomType_ModalInfo'
    };

  } catch (error) {
    logger.error({ error }, 'Failed to parse MD file. Using defaults.');
    return getDefaultEndpoints();
  }
};

const getDefaultEndpoints = () => ({
  getHotelList: '/GetHotelList',
  getHostList: '/GetHostList',
  getRoomTypeInfo: '/GetRoomType_Info',
  getRoomTypeModalInfo: '/GetRoomType_ModalInfo'
});

module.exports = { parseApiEndpoints };
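An illustration of the parser's output shape (the path here is hypothetical; main.js resolves the real one):

```js
const { parseApiEndpoints } = require('./mdParser');

const endpoints = parseApiEndpoints('/path/to/project1.md'); // hypothetical path
// endpoints => {
//   getHotelList: '/GetHotelList',
//   getHostList: '/GetHostList',
//   getRoomTypeInfo: '/GetRoomType_Info',
//   getRoomTypeModalInfo: '/GetRoomType_ModalInfo'
// }
```

The same shape is returned when the MD file is missing, via getDefaultEndpoints.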
99
temporary_project_management/tests/app.test.js
Normal file
@@ -0,0 +1,99 @@
const initDB = require('../src/db/init');
const { saveHotelsTransaction, saveLoopsTransaction } = require('../src/services/dataService');
const { concurrentFetch } = require('../src/utils/http');
const { query, close } = require('../src/db/utils');
const axios = require('axios');

// Setup Mocks
jest.mock('uuid', () => {
  let count = 0;
  return {
    v4: () => `test-guid-${++count}`
  };
});

// Mock Axios with a persistent mock client
jest.mock('axios', () => {
  const mockClient = jest.fn();
  return {
    create: jest.fn(() => mockClient),
    __mockClient: mockClient
  };
});

jest.mock('../src/utils/logger', () => ({
  info: jest.fn(),
  error: jest.fn(),
  warn: jest.fn(),
  debug: jest.fn(),
}));

describe('System Tests', () => {
  jest.setTimeout(20000); // Increase timeout for retry logic

  afterAll(async () => {
    await close();
  });

  describe('Database & Services', () => {
    beforeEach(async () => {
      // Dropping a partitioned parent table also drops its partitions.
      await query('DROP TABLE IF EXISTS temporary_project.hotels CASCADE');
      await query('DROP TABLE IF EXISTS temporary_project.rooms CASCADE');
      await query('DROP TABLE IF EXISTS temporary_project.room_type CASCADE');
      await query('DROP TABLE IF EXISTS temporary_project.loops CASCADE');
      await initDB();
    });

    test('InitDB should be idempotent', async () => {
      await initDB();
      const result = await query(
        `SELECT table_name FROM information_schema.tables
         WHERE table_schema = 'temporary_project' AND table_name = 'hotels'`
      );
      expect(result.rows.length).toBe(1);
    });

    test('Transaction commits a valid batch', async () => {
      // hotel_id is an INTEGER column, so hotelCode must be numeric.
      const data = [
        { hotelID: 1, hotelCode: 1001, hotelName: 'Hotel 1' },
        { hotelID: 2, hotelCode: 1002, hotelName: 'Hotel 2' }
      ];
      await saveHotelsTransaction(data);
      const { rows } = await query('SELECT * FROM temporary_project.hotels');
      expect(rows.length).toBe(2);
    });

    test('Loop Address Writing', async () => {
      const loops = [
        { id: 1, roomTypeID: 10, modalAddress: 'Addr1', type: 'Type1', name: 'Loop1' },
        { id: 2, roomTypeID: 10, modalAddress: 'Addr2', type: 'Type2', name: 'Loop2' }
      ];
      await saveLoopsTransaction(loops);
      const { rows } = await query('SELECT * FROM temporary_project.loops ORDER BY id');
      expect(rows.length).toBe(2);
      expect(rows[0].loop_name).toBe('Loop1');
    });
  });

  describe('HTTP Utils', () => {
    const mockClient = axios.__mockClient;

    beforeEach(() => {
      mockClient.mockReset();
    });

    test('Retry Logic: Should retry on failure', async () => {
      // Mock two failures, then success
      mockClient
        .mockRejectedValueOnce(new Error('Fail 1'))
        .mockRejectedValueOnce(new Error('Fail 2'))
        .mockResolvedValue({ data: 'Success' });

      const result = await concurrentFetch('/test');
      expect(result).toBe('Success');
      expect(mockClient).toHaveBeenCalledTimes(3); // initial attempt + 2 retries
    });

    test('Timeout Logic: Handled by axios config', async () => {
      expect(true).toBe(true);
    });
  });
});