feat(logs): implement scheduled log cleanup and unify the service port to 19070

- Add an hourly scheduled cleanup for project console logs, keeping only the latest 1000 entries from the past 24 hours
- Unify all service ports on 19070, covering the backend API, the Nginx configuration, and the documentation
- Improve the frontend log display: messages collapse to a single line and expand on click
- Update the related test cases and deployment documentation
2026-01-22 14:06:44 +08:00
parent 4551ae5733
commit 240e708fbe
21 changed files with 472 additions and 211 deletions

View File

@@ -254,7 +254,7 @@ REDIS_PASSWORD=your-redis-password
REDIS_DB=15
# 服务器配置
-PORT=3001
+PORT=19070
NODE_ENV=production
```

View File

@@ -9,14 +9,14 @@
# /vol1/1000/Docker/nginx/sites/bls_project_console/
# and will be served from:
# /var/www/bls_project_console/
-# - Backend runs on the HOST at 127.0.0.1:3001.
+# - Backend runs on the HOST at 127.0.0.1:19070.
# Nginx container reaches host via host.docker.internal.
# On Linux you typically need in nginx docker-compose:
# extra_hosts:
# - "host.docker.internal:host-gateway"
server {
-listen 80;
+listen 19199;
server_name blv-rd.tech;
root /var/www/bls_project_console;
@@ -24,7 +24,7 @@ server {
# API reverse proxy
location /api/ {
-proxy_pass http://host.docker.internal:3001;
+proxy_pass http://host.docker.internal:19070;
proxy_http_version 1.1;
proxy_set_header Host $host;

View File

@@ -100,8 +100,8 @@
- 服务器类型:NAS
【访问信息】
-- 前端访问地址:blv-rd.tech:20001
+- 前端访问地址:blv-rd.tech:19199
-- 后端API地址:http://127.0.0.1:3001
+- 后端API地址:http://127.0.0.1:19070
【文件路径】
- 项目文件目录:/vol1/1000/Docker/nginx/project/bls/bls_project_console

View File

@@ -12,7 +12,7 @@ RestartSec=10
StandardOutput=append:/vol1/1000/Docker/nginx/project/bls/bls_project_console/backend/logs/systemd-out.log
StandardError=append:/vol1/1000/Docker/nginx/project/bls/bls_project_console/backend/logs/systemd-err.log
Environment=NODE_ENV=production
-Environment=PORT=19910
+Environment=PORT=19070
[Install]
WantedBy=multi-user.target

View File

@@ -6,8 +6,8 @@
**检查日期**: 2026-01-16
**部署架构**: Nginx(Docker容器)+ Express后端(systemd管理)
-**前端访问地址**: blv-rd.tech:20100
+**前端访问地址**: blv-rd.tech:19199
-**后端API地址**: http://127.0.0.1:19910
+**后端API地址**: http://127.0.0.1:19070
## 二、配置文件清单
@@ -21,7 +21,7 @@
```nginx
server {
-listen 20001;
+listen 19199;
server_name blv-rd.tech;
root /var/www/bls_project_console;
@@ -30,7 +30,7 @@ server {
client_max_body_size 100M;
location /api/ {
-proxy_pass http://host.docker.internal:3001;
+proxy_pass http://host.docker.internal:19070;
proxy_http_version 1.1;
proxy_set_header Host $host;
@@ -54,10 +54,10 @@ server {
**检查项**:
-- ✅ 监听端口: 20100(正确)
+- ✅ 监听端口: 19199(正确)
- ✅ 服务器名称: blv-rd.tech(正确)
- ✅ 静态文件根目录: /var/www/bls_project_console(正确)
-- ✅ API代理地址: http://host.docker.internal:19910(正确,Nginx在Docker容器中)
+- ✅ API代理地址: http://host.docker.internal:19070(正确,Nginx在Docker容器中)
- ✅ 文件上传大小限制: 100M(正确)
- ✅ Vue Router history模式支持: try_files $uri $uri/ /index.html(正确)
- ✅ 超时设置: 连接60s、发送60s、读取300s(正确)
@@ -93,7 +93,7 @@ RestartSec=10
StandardOutput=append:/vol1/1000/Docker/nginx/project/bls/bls_project_console/backend/logs/systemd-out.log
StandardError=append:/vol1/1000/Docker/nginx/project/bls/bls_project_console/backend/logs/systemd-err.log
Environment=NODE_ENV=production
-Environment=PORT=3001
+Environment=PORT=19070
[Install]
WantedBy=multi-user.target
@@ -109,7 +109,7 @@ WantedBy=multi-user.target
- ✅ 重启策略: on-failure(正确)
- ✅ 重启延迟: 10秒(合理)
- ✅ 日志输出: 标准输出和错误日志分离(正确)
-- ✅ 环境变量: NODE_ENV=production, PORT=19910(正确)
+- ✅ 环境变量: NODE_ENV=production, PORT=19070(正确)
- ✅ 开机自启: WantedBy=multi-user.target(正确)
**说明**:
@@ -129,7 +129,7 @@ WantedBy=multi-user.target
**关键配置**:
```javascript
-const PORT = 19910;
+const PORT = parsePort(process.env.PORT, 19070);
app.use(cors());
app.use(express.json());
@@ -145,7 +145,7 @@ app.get('/api/health', (req, res) => {
**检查项**:
-- ✅ 端口配置: 19910(与systemd配置一致)
+- ✅ 端口配置: 19070(与systemd配置一致)
- ✅ CORS中间件: 已启用(正确)
- ✅ JSON解析: 已启用(正确)
- ✅ API路由: /api/logs, /api/commands, /api/projects(正确)
@@ -155,7 +155,7 @@ app.get('/api/health', (req, res) => {
**说明**:
-- 端口19910与systemd服务配置中的PORT环境变量一致
+- 端口19070与systemd服务配置中的PORT环境变量一致
- 提供健康检查端点,便于监控
- 支持优雅关闭,确保Redis连接正确关闭
@@ -181,7 +181,7 @@ export default defineConfig({
port: 3000,
proxy: {
'/api': {
-target: 'http://localhost:3001',
+target: 'http://localhost:19070',
changeOrigin: true,
},
},
@@ -200,7 +200,7 @@ export default defineConfig({
- ✅ 源码根目录: src/frontend(正确)
- ✅ 输出目录: ../../dist(正确)
- ✅ 开发服务器端口: 3000(正确)
-- ✅ API代理: /api -> http://localhost:3001(正确,仅用于开发环境)
+- ✅ API代理: /api -> http://localhost:19070(正确,仅用于开发环境)
- ✅ 路径别名: @ -> src/frontend(正确)
**说明**:
@@ -270,10 +270,10 @@ app.mount('#app');
| 配置项 | 端口 | 状态 |
| -------------------------- | ----- | --------------- |
-| 后端服务器 (server.js) | 19910 | ✅ |
+| 后端服务器 (server.js) | 19070 | ✅ |
-| Systemd服务 (PORT环境变量) | 19910 | ✅ |
+| Systemd服务 (PORT环境变量) | 19070 | ✅ |
-| Nginx代理目标 | 19910 | ✅ |
+| Nginx代理目标 | 19070 | ✅ |
-| Nginx监听端口 | 20100 | ✅ |
+| Nginx监听端口 | 19199 | ✅ |
| Vite开发服务器 | 3000 | ✅ (仅开发环境) |
### 路径配置一致性
@@ -303,7 +303,7 @@ app.mount('#app');
2. ✅ 上传dist文件夹内容到: /vol1/1000/Docker/nginx/project/bls/bls_project_console
3. ✅ 上传Nginx配置到: /vol1/1000/Docker/nginx/conf.d/bls_project_console.conf
4. ✅ 重启Nginx容器: `docker restart nginx`
-5. ✅ 访问地址: http://blv-rd.tech:20100
+5. ✅ 访问地址: http://blv-rd.tech:19199
### 后端部署流程
@@ -321,7 +321,7 @@ app.mount('#app');
### 1. Nginx容器网络配置
-**问题**: Nginx容器需要能够访问宿主机的3001端口
+**问题**: Nginx容器需要能够访问宿主机的19070端口
**建议**:
@@ -386,8 +386,8 @@ npm --version
redis-cli ping
# 4. 检查端口占用
-netstat -tlnp | grep 3001
+netstat -tlnp | grep 19070
-netstat -tlnp | grep 20001
+netstat -tlnp | grep 19199
```
### 部署后验证
@@ -397,16 +397,16 @@ netstat -tlnp | grep 20001
systemctl status bls-project-console.service
# 2. 检查后端服务
-curl http://localhost:3001/api/health
+curl http://localhost:19070/api/health
# 3. 检查Nginx容器
docker ps | grep nginx
# 4. 检查前端访问
-curl http://blv-rd.tech:20001
+curl http://blv-rd.tech:19199
# 5. 检查API代理
-curl http://blv-rd.tech:20001/api/health
+curl http://blv-rd.tech:19199/api/health
```
---

View File

@@ -9,8 +9,8 @@
## 二、环境信息
-- **前端访问地址**: blv-rd.tech:20100
+- **前端访问地址**: blv-rd.tech:19199
-- **后端API地址**: http://127.0.0.1:19910
+- **后端API地址**: http://127.0.0.1:19070
- **NAS项目文件目录**: `/vol1/1000/Docker/nginx/project/bls/bls_project_console`
- **NAS配置文件目录**: `/vol1/1000/Docker/nginx/conf.d`
- **Systemd服务目录**: `/etc/systemd/system/`
@@ -45,11 +45,11 @@ docker port nginx
### 3. 检查端口占用
```bash
-# 检查后端端口19910是否被占用
+# 检查后端端口19070是否被占用
-netstat -tlnp | grep 19910
+netstat -tlnp | grep 19070
-# 检查前端端口20100是否被占用
+# 检查前端端口19199是否被占用
-netstat -tlnp | grep 20100
+netstat -tlnp | grep 19199
```
### 4. 检查Redis服务
@@ -166,7 +166,7 @@ docker exec nginx nginx -s reload
在浏览器中访问:
```
-http://blv-rd.tech:20100
+http://blv-rd.tech:19199
```
应该能看到项目的前端页面。
@@ -265,7 +265,7 @@ nano .env
```env
NODE_ENV=production
-PORT=19910
+PORT=19070
REDIS_HOST=localhost
REDIS_PORT=6379
```
@@ -283,7 +283,7 @@ cd /vol1/1000/Docker/nginx/project/bls/bls_project_console/backend
node server.js
# 如果看到类似以下输出,说明启动成功:
-# BLS Project Console backend server is running on port 19910
+# BLS Project Console backend server is running on port 19070
```
如果启动失败,查看错误信息并修复:
@@ -293,7 +293,7 @@ node server.js
redis-cli ping
# 检查端口占用
-netstat -tlnp | grep 19910
+netstat -tlnp | grep 19070
# 查看详细错误日志
node server.js 2>&1 | tee startup.log
@@ -332,7 +332,7 @@ RestartSec=10
StandardOutput=append:/vol1/1000/Docker/nginx/project/bls/bls_project_console/backend/logs/systemd-out.log
StandardError=append:/vol1/1000/Docker/nginx/project/bls/bls_project_console/backend/logs/systemd-err.log
Environment=NODE_ENV=production
-Environment=PORT=19910
+Environment=PORT=19070
[Install]
WantedBy=multi-user.target
@@ -401,10 +401,10 @@ tail -f /vol1/1000/Docker/nginx/project/bls/bls_project_console/backend/logs/sys
```bash
# 检查端口监听
-netstat -tlnp | grep 19910
+netstat -tlnp | grep 19070
# 测试API接口
-curl http://localhost:19910/api/projects
+curl http://localhost:19070/api/projects
# 查看服务进程
ps aux | grep "node server.js"
@@ -413,7 +413,7 @@ ps aux | grep "node server.js"
在浏览器中访问:
```
-http://blv-rd.tech:20100/api/projects
+http://blv-rd.tech:19199/api/projects
```
应该能返回JSON数据。
@@ -573,7 +573,7 @@ docker logs nginx --tail 100
**可能原因**:
- 后端服务未启动
-- 后端端口不是19910
+- 后端端口不是19070
- Redis连接失败
- 防火墙阻止了连接
@@ -584,7 +584,7 @@ docker logs nginx --tail 100
sudo systemctl status bls-project-console.service
# 2. 检查后端端口
-netstat -tlnp | grep 19910
+netstat -tlnp | grep 19070
# 3. 查看服务日志
sudo journalctl -u bls-project-console.service -n 50
@@ -593,7 +593,7 @@ sudo journalctl -u bls-project-console.service -n 50
redis-cli ping
# 5. 测试后端API
-curl http://localhost:19910/api/projects
+curl http://localhost:19070/api/projects
# 6. 重启服务
sudo systemctl restart bls-project-console.service
@@ -624,7 +624,7 @@ cat /etc/systemd/system/bls-project-console.service
ls -la /vol1/1000/Docker/nginx/project/bls/bls_project_console/backend
# 5. 检查端口占用
-netstat -tlnp | grep 3001
+netstat -tlnp | grep 19070
# 6. 检查Redis服务
sudo systemctl status redis
@@ -640,7 +640,7 @@ node server.js
**可能原因**:
- 配置文件语法错误
-- 端口20100已被占用
+- 端口19199已被占用
- 配置文件路径错误
**解决方法**:
@@ -650,7 +650,7 @@ node server.js
docker exec nginx nginx -t
# 2. 检查端口占用
-netstat -tlnp | grep 20100
+netstat -tlnp | grep 19199
# 3. 查看Nginx错误日志
docker logs nginx --tail 100
@@ -865,8 +865,8 @@ docker restart nginx
```bash
# 只允许必要的端口
-sudo ufw allow 20100/tcp
+sudo ufw allow 19199/tcp
-sudo ufw allow 19910/tcp
+sudo ufw allow 19070/tcp
sudo ufw enable
```

View File

@@ -105,7 +105,7 @@ module.exports = {
max_memory_restart: '1G',
env: {
NODE_ENV: 'production',
-PORT: 19910,
+PORT: 19070,
},
error_file: './logs/pm2-error.log',
out_file: './logs/pm2-out.log',
@@ -187,14 +187,14 @@ pm2 status
pm2 logs bls-project-console
# 测试健康检查接口
-curl http://localhost:19910/api/health
+curl http://localhost:19070/api/health
```
#### 8.2 检查前端访问
在浏览器中访问:
-- `http://localhost/` 或配置的域名
+- `http://localhost:19199/` 或配置的域名
#### 8.3 检查 API 代理
@@ -252,7 +252,7 @@ nginx -v
1. 检查端口是否被占用:
```bash
-netstat -ano | findstr :19910
+netstat -ano | findstr :19070
```
2. 检查 Redis 连接:
@@ -297,13 +297,13 @@ nginx -v
2. 检查 Nginx 代理配置:
```bash
# 确保 proxy_pass 指向正确的后端地址
-curl http://localhost:19910/api/health
+curl http://localhost:19070/api/health
```
## 端口说明
-- **19910**: 后端 API 服务端口
+- **19070**: 后端 API 服务端口
-- **80**: Nginx HTTP 服务端口
+- **19199**: Nginx HTTP 服务端口
## 注意事项

View File

@@ -2,11 +2,11 @@
## 一、环境信息
-- **前端访问地址**: blv-rd.tech:20100
+- **前端访问地址**: blv-rd.tech:19199
- **NAS项目文件目录**: `/vol1/1000/Docker/nginx/project/bls/bls_project_console`
- **NAS配置文件目录**: `/vol1/1000/Docker/nginx/conf.d`
- **项目类型**: Vue3前端 + Express后端
-- **后端端口**: 19910
+- **后端端口**: 19070
## 二、本地编译步骤
@@ -137,7 +137,7 @@ pm2 delete bls-project-console
**注意**:
-- 后端服务会在宿主机上运行,端口为19910
+- 后端服务会在宿主机上运行,端口为19070
- 确保Redis服务已启动并可访问
- PM2会自动管理进程崩溃重启
@@ -170,7 +170,7 @@ docker exec nginx nginx -t
在浏览器中访问:
```
-http://blv-rd.tech:20100
+http://blv-rd.tech:19199
```
应该能看到项目的前端页面。
@@ -180,7 +180,7 @@ http://blv-rd.tech:20100
在浏览器中访问:
```
-http://blv-rd.tech:20100/api/projects
+http://blv-rd.tech:19199/api/projects
```
应该能返回JSON数据(如果后端正常运行)
@@ -221,14 +221,14 @@ docker exec nginx tail -f /var/log/nginx-custom/error.log
**可能原因**:
- 后端服务未启动
-- 后端端口不是19910
+- 后端端口不是19070
- `host.docker.internal` 无法解析
- 防火墙阻止了连接
**解决方法**:
1. 检查PM2服务状态:`pm2 status`
-2. 检查后端端口:`netstat -tlnp | grep 19910`
+2. 检查后端端口:`netstat -tlnp | grep 19070`
3. 查看PM2日志:`pm2 logs bls-project-console`
4. 在Nginx容器内测试连接:`docker exec nginx ping host.docker.internal`
5. 检查防火墙规则
@@ -239,13 +239,13 @@ docker exec nginx tail -f /var/log/nginx-custom/error.log
**可能原因**:
- 配置文件语法错误
-- 端口20100已被占用
+- 端口19199已被占用
- 配置文件路径错误
**解决方法**:
1. 检查配置文件语法:`docker exec nginx nginx -t`
-2. 检查端口占用:`netstat -tlnp | grep 20100`
+2. 检查端口占用:`netstat -tlnp | grep 19199`
3. 查看Nginx错误日志:`docker logs nginx`
## 六、后续更新流程
@@ -342,7 +342,7 @@ Web_BLS_ProjectConsole/
- 应用名称:`bls-project-console`
- 工作目录:`/vol1/1000/Docker/nginx/project/bls/bls_project_console/backend`
- 启动脚本:`./server.js`
-- 环境变量:`NODE_ENV=production`, `PORT=19910`
+- 环境变量:`NODE_ENV=production`, `PORT=19070`
- 内存限制:1GB(超过自动重启)
- 日志文件:`./logs/pm2-error.log` 和 `./logs/pm2-out.log`
@@ -374,7 +374,7 @@ pm2 show bls-project-console
## 九、注意事项
-1. **端口映射**: 确保Nginx容器的20100端口已映射到宿主机的20100端口
+1. **端口映射**: 确保Nginx容器的19199端口已映射到宿主机的19199端口
2. **host.docker.internal**: 在Linux上需要在Docker Compose中添加 `extra_hosts` 配置
3. **文件权限**: 确保上传的文件有正确的读写权限
4. **Redis连接**: 确保后端能连接到Redis服务

View File

@@ -10,7 +10,7 @@ module.exports = {
max_memory_restart: '1G',
env: {
NODE_ENV: 'production',
-PORT: 19910,
+PORT: 19070,
},
error_file: './logs/pm2-error.log',
out_file: './logs/pm2-out.log',

View File

@@ -1,5 +1,5 @@
server {
-listen 20100;
+listen 19199;
server_name blv-rd.tech;
root /var/www/bls_project_console;
@@ -8,7 +8,7 @@ server {
client_max_body_size 100M;
location /api/ {
-proxy_pass http://host.docker.internal:19910;
+proxy_pass http://host.docker.internal:19070;
proxy_http_version 1.1;
proxy_set_header Host $host;

View File

@@ -1,12 +1,12 @@
server {
-listen 80;
+listen 19199;
server_name localhost;
root C:/nginx/sites/bls_project_console;
index index.html;
location /api/ {
-proxy_pass http://127.0.0.1:19910;
+proxy_pass http://127.0.0.1:19070;
proxy_http_version 1.1;
proxy_set_header Host $host;

View File

@@ -5,7 +5,7 @@ info:
description: |
BLS Project Console 后端 API(与当前实现保持一致)
servers:
-- url: http://localhost:3001
+- url: http://localhost:19070
paths:
/api/health:
get:

View File

@@ -0,0 +1,16 @@
# Change: Update Log Prune Schedule
## Why
项目日志持续增长会造成Redis列表膨胀影响读写性能需要在服务端统一执行定时清理。
## What Changes
- 每小时整点检查 `项目心跳` 内的全部项目日志列表
- 删除 `${projectName}_项目控制台` 中超过24小时的日志记录
- 若日志数量超过1000条按时间倒序保留最新1000条
## Impact
- Affected specs: specs/logging/spec.md
- Affected code: src/backend/routes/logs.js, src/backend/server.js
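
A minimal sketch of the hourly schedule described above, aligned to the top of the hour; the helper names here are illustrative, and the real wiring lives in `src/backend/server.js`:

```javascript
// Illustrative only: run an async task at the next top of the hour, then hourly.
function msUntilNextHour(nowMs = Date.now()) {
  const next = new Date(nowMs);
  next.setMinutes(0, 0, 0);
  next.setHours(next.getHours() + 1);
  return next.getTime() - nowMs;
}

function scheduleHourly(task) {
  setTimeout(async () => {
    // First run fires exactly on the hour, then repeats every 3 600 000 ms.
    try {
      await task();
    } catch (err) {
      void err; // swallow errors so the schedule keeps running
    }
    setInterval(() => task().catch(() => {}), 60 * 60 * 1000);
  }, msUntilNextHour());
}

// e.g. scheduleHourly(() => pruneConsoleLogsForProjects(redis));
```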

View File

@@ -0,0 +1,20 @@
## ADDED Requirements
### Requirement: Scheduled Log Pruning
The system SHALL prune each `${projectName}_项目控制台` log list once per hour.
#### Scenario: Hourly pruning of console logs
- **WHEN** the server reaches the top of an hour
- **THEN** it SHALL remove log records older than 24 hours
- **AND** it SHALL keep only the latest 1000 log records by timestamp
- **AND** the pruning operation SHALL explicitly sort records by timestamp to ensure correctness regardless of insertion order
### Requirement: Console Log Display
The console UI SHALL display logs in a compact format.
#### Scenario: Log Message Truncation
- **WHEN** a log message is displayed in the list
- **THEN** it SHALL be restricted to a single line
- **AND** overflow content SHALL be indicated with an ellipsis
- **AND** clicking the message SHALL toggle expansion to show the full content
- **AND** only one message SHALL be expanded at a time
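
A rough sketch of the retention rule above: apply the 24-hour TTL first, then sort by timestamp so insertion order does not matter, and keep only the newest 1000 entries. The names are illustrative, not the exact backend API:

```javascript
// Illustrative only: given raw JSON log strings, return the entries worth keeping.
const MAX_LOGS = 1000;
const TTL_MS = 24 * 60 * 60 * 1000;

function pruneRawLogs(rawItems, nowMs = Date.now()) {
  const cutoffMs = nowMs - TTL_MS;
  return rawItems
    .map((raw) => {
      try {
        return { raw, ts: Date.parse(JSON.parse(raw).timestamp) };
      } catch {
        return { raw, ts: NaN };
      }
    })
    // Drop entries older than 24 hours (and entries whose timestamp cannot be parsed).
    .filter((item) => Number.isFinite(item.ts) && item.ts >= cutoffMs)
    // Sort oldest -> newest, then keep only the newest MAX_LOGS at the tail.
    .sort((a, b) => a.ts - b.ts)
    .slice(-MAX_LOGS)
    .map((item) => item.raw);
}
```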

View File

@@ -0,0 +1,13 @@
## 1. OpenSpec
- [x] 1.1 Add logging spec delta for scheduled log pruning
## 2. Backend
- [x] 2.1 Implement hourly log pruning with 24h TTL and 1000 limit
- [x] 2.2 Update scheduled pruning tests
## 3. Verify
- [x] 3.1 Run `npm run test`
- [x] 3.2 Run `npm run lint`

View File

@@ -9,45 +9,6 @@ import { getProjectsList } from '../services/migrateHeartbeatData.js';
const LOGS_MAX_LEN = 1000;
const LOG_TTL_MS = 24 * 60 * 60 * 1000;
async function listConsoleLogKeys(redis) {
const pattern = '*_项目控制台';
if (typeof redis.scanIterator === 'function') {
const keys = [];
for await (const key of redis.scanIterator({ MATCH: pattern, COUNT: 500 })) {
keys.push(key);
}
return keys;
}
if (typeof redis.keys === 'function') {
const list = await redis.keys(pattern);
return Array.isArray(list) ? list : [];
}
return [];
}
export async function trimProjectConsoleLogsByLength(redis, options = {}) {
const maxLen =
typeof options.maxLen === 'number' && Number.isFinite(options.maxLen)
? Math.max(1, Math.trunc(options.maxLen))
: LOGS_MAX_LEN;
const keys = await listConsoleLogKeys(redis);
let trimmedKeys = 0;
for (const key of keys) {
// eslint-disable-next-line no-await-in-loop
const len = await redis.lLen(key);
if (!Number.isFinite(len) || len <= maxLen) continue;
// eslint-disable-next-line no-await-in-loop
await redis.lTrim(key, -maxLen, -1);
trimmedKeys += 1;
}
return { keysScanned: keys.length, keysTrimmed: trimmedKeys, maxLen };
}
function parsePositiveInt(value, defaultValue) {
const num = Number.parseInt(String(value), 10);
if (!Number.isFinite(num) || num <= 0) return defaultValue;
@@ -183,11 +144,9 @@ async function pruneAndReadLogsAtomically(redis, key, limit) {
await redis.watch(key);
const rawItems = await redis.lRange(key, 0, -1);
-const ttlKept = rawItems.filter((raw) => shouldKeepRawLog(raw, cutoffMs));
-const kept =
-ttlKept.length > 0
-? ttlKept.slice(-maxLen)
-: rawItems.slice(-Math.min(maxLen, rawItems.length));
+const kept = rawItems
+.filter((raw) => shouldKeepRawLog(raw, cutoffMs))
+.slice(-maxLen);
const needsRewrite = rawItems.length !== kept.length || rawItems.length > maxLen;
if (!needsRewrite) return kept.slice(-effectiveLimit);
@@ -211,6 +170,83 @@ async function pruneAndReadLogsAtomically(redis, key, limit) {
return rawItems;
}
export async function pruneConsoleLogsForProjects(injectedRedis, options = {}) {
const redis = injectedRedis || (await getRedisClient());
if (!redis?.isReady) {
throw new Error('Redis 未就绪');
}
const maxLen = parsePositiveInt(options.maxLen, LOGS_MAX_LEN);
const configuredTtlMs = parseDurationMs(process.env.LOG_TTL_MS, LOG_TTL_MS);
const effectiveTtlMs = Math.max(configuredTtlMs, LOG_TTL_MS);
const cutoffMs = Date.now() - effectiveTtlMs;
const projects = await getProjectsList(redis);
if (!Array.isArray(projects) || projects.length === 0) {
return { trimmed: 0, scanned: 0 };
}
let trimmed = 0;
let scanned = 0;
for (const project of projects) {
const projectName =
typeof project?.projectName === 'string' ? project.projectName.trim() : '';
if (!projectName) continue;
scanned += 1;
const key = projectConsoleKey(projectName);
const keyType = await redis.type(key);
if (keyType !== 'list') continue;
let updated = false;
for (let attempt = 0; attempt < 3; attempt += 1) {
try {
await redis.watch(key);
const rawItems = await redis.lRange(key, 0, -1);
if (rawItems.length === 0) break;
const keptByTtl = rawItems.filter((raw) => shouldKeepRawLog(raw, cutoffMs));
// Explicitly sort by timestamp to ensure correctness regardless of insertion order
// Sort ascending (oldest -> newest), so we can slice from the end
keptByTtl.sort((a, b) => {
const tsA = parseLogTimestampMs(safeJsonParse(a)?.timestamp) || 0;
const tsB = parseLogTimestampMs(safeJsonParse(b)?.timestamp) || 0;
return tsA - tsB;
});
let kept = keptByTtl;
if (keptByTtl.length > maxLen) {
kept = keptByTtl.slice(-maxLen);
}
const needsRewrite =
rawItems.length !== kept.length || keptByTtl.length !== rawItems.length;
if (!needsRewrite) break;
const multi = redis.multi();
multi.del(key);
if (kept.length > 0) {
multi.rPush(key, ...kept);
}
const execResult = await multi.exec();
if (execResult === null) continue;
updated = true;
break;
} finally {
if (typeof redis.unwatch === 'function') {
await redis.unwatch();
}
}
}
if (updated) trimmed += 1;
}
return { trimmed, scanned };
}
// 获取日志列表
router.get('/', async (req, res) => {
const projectName =

View File

@@ -2,8 +2,8 @@ import { describe, expect, it } from 'vitest';
import request from 'supertest';
import { createApp } from '../app.js';
import { pruneConsoleLogsForProjects } from './logs.js';
import { createFakeRedis } from '../test/fakeRedis.js';
import { trimProjectConsoleLogsByLength } from './logs.js';
describe('projects API', () => {
it('GET /api/projects returns projects from unified list', async () => {
@@ -96,24 +96,6 @@ describe('projects API', () => {
});
describe('logs API', () => {
it('trimProjectConsoleLogsByLength trims *_项目控制台 lists to 1000 items', async () => {
const projectName = 'Demo';
const key = `${projectName}_项目控制台`;
const items = Array.from({ length: 1505 }, (_, i) => `l${i + 1}`);
const redis = createFakeRedis({
[key]: items,
other: [ 'x', 'y' ],
});
const result = await trimProjectConsoleLogsByLength(redis, { maxLen: 1000 });
expect(result).toMatchObject({ keysScanned: 1, keysTrimmed: 1, maxLen: 1000 });
const listItems = await redis.lRange(key, 0, -1);
expect(listItems.length).toBe(1000);
expect(listItems[0]).toBe('l506');
expect(listItems[listItems.length - 1]).toBe('l1505');
});
it('GET /api/logs uses LOG_TTL_MS=24h without wiping recent logs', async () => {
const prev = process.env.LOG_TTL_MS;
process.env.LOG_TTL_MS = '24h';
@@ -221,38 +203,6 @@ describe('logs API', () => {
expect(JSON.parse(listItems[0]).id).toBe('new');
});
it('GET /api/logs keeps latest 1000 logs when all timestamps are expired', async () => {
const now = Date.now();
const projectName = 'Demo';
const key = `${projectName}_项目控制台`;
const items = Array.from({ length: 1505 }, (_, i) =>
JSON.stringify({
id: `log-${i + 1}`,
timestamp: new Date(now - 26 * 60 * 60 * 1000).toISOString(),
level: 'info',
message: `m${i + 1}`,
}),
);
const redis = createFakeRedis({
[key]: items,
});
const app = createApp({ redis });
const resp = await request(app)
.get('/api/logs')
.query({ projectName, limit: 1000 });
expect(resp.status).toBe(200);
expect(resp.body.logs.length).toBe(1000);
expect(resp.body.logs[0].id).toBe('log-506');
expect(resp.body.logs[resp.body.logs.length - 1].id).toBe('log-1505');
const listItems = await redis.lRange(key, 0, -1);
expect(listItems.length).toBe(1000);
expect(JSON.parse(listItems[0]).id).toBe('log-506');
expect(JSON.parse(listItems[listItems.length - 1]).id).toBe('log-1505');
});
it('GET /api/logs keeps unix-second timestamps and prunes correctly', async () => {
const now = Date.now();
const nowSec = Math.floor(now / 1000);
@@ -382,6 +332,146 @@ describe('logs API', () => {
expect(JSON.parse(listItems[0]).id).toBe('x');
});
it('prunes project console logs to latest 1000 items (rpush order)', async () => {
const now = Date.now();
const projectName = 'Demo';
const key = `${projectName}_项目控制台`;
const logs = Array.from({ length: 1005 }, (_, i) =>
JSON.stringify({
id: `log-${i}`,
timestamp: new Date(now + i * 1000).toISOString(),
level: 'info',
message: `log-${i}`,
}),
);
const redis = createFakeRedis({
项目心跳: [
JSON.stringify({
projectName,
apiBaseUrl: 'http://localhost:8080',
lastActiveAt: Date.now(),
}),
],
[key]: logs,
});
const result = await pruneConsoleLogsForProjects(redis, { maxLen: 1000 });
expect(result.scanned).toBe(1);
expect(result.trimmed).toBe(1);
const listItems = await redis.lRange(key, 0, -1);
expect(listItems.length).toBe(1000);
expect(JSON.parse(listItems[0]).id).toBe('log-5');
expect(JSON.parse(listItems[999]).id).toBe('log-1004');
});
it('prunes project console logs to latest 1000 items (lpush order)', async () => {
const now = Date.now();
const projectName = 'Demo';
const key = `${projectName}_项目控制台`;
const logs = Array.from({ length: 1005 }, (_, i) =>
JSON.stringify({
id: `log-${i}`,
timestamp: new Date(now + i * 1000).toISOString(),
level: 'info',
message: `log-${i}`,
}),
).reverse();
const redis = createFakeRedis({
项目心跳: [
JSON.stringify({
projectName,
apiBaseUrl: 'http://localhost:8080',
lastActiveAt: Date.now(),
}),
],
[key]: logs,
});
const result = await pruneConsoleLogsForProjects(redis, { maxLen: 1000 });
expect(result.scanned).toBe(1);
expect(result.trimmed).toBe(1);
const listItems = await redis.lRange(key, 0, -1);
expect(listItems.length).toBe(1000);
// Logic enforces chronological order (Oldest -> Newest)
expect(JSON.parse(listItems[0]).id).toBe('log-5');
expect(JSON.parse(listItems[999]).id).toBe('log-1004');
});
it('prunes project console logs older than 24h during scheduled cleanup', async () => {
const now = Date.now();
const projectName = 'Demo';
const key = `${projectName}_项目控制台`;
const redis = createFakeRedis({
项目心跳: [
JSON.stringify({
projectName,
apiBaseUrl: 'http://localhost:8080',
lastActiveAt: Date.now(),
}),
],
[key]: [
JSON.stringify({
id: 'old',
timestamp: new Date(now - 25 * 60 * 60 * 1000).toISOString(),
level: 'info',
message: 'old',
}),
JSON.stringify({
id: 'new',
timestamp: new Date(now - 60 * 60 * 1000).toISOString(),
level: 'info',
message: 'new',
}),
],
});
const result = await pruneConsoleLogsForProjects(redis, { maxLen: 1000 });
expect(result.scanned).toBe(1);
expect(result.trimmed).toBe(1);
const listItems = await redis.lRange(key, 0, -1);
expect(listItems.length).toBe(1);
expect(JSON.parse(listItems[0]).id).toBe('new');
});
it('prunes correctly even if logs are inserted out of order', async () => {
const now = Date.now();
const projectName = 'Demo';
const key = `${projectName}_项目控制台`;
// Insert Oldest, Newest, Middle in that order
const logs = [
JSON.stringify({ id: 'old', timestamp: new Date(now - 100000).toISOString(), message: 'old' }),
JSON.stringify({ id: 'new', timestamp: new Date(now).toISOString(), message: 'new' }),
JSON.stringify({ id: 'mid', timestamp: new Date(now - 50000).toISOString(), message: 'mid' }),
];
const redis = createFakeRedis({
项目心跳: [ JSON.stringify({ projectName, apiBaseUrl: 'http://localhost:8080', lastActiveAt: Date.now() }) ],
[key]: logs,
});
// MaxLen 2: Should keep New and Mid. Old is dropped.
// Sorted result in Redis should be [Mid, New] (ascending time)
const result = await pruneConsoleLogsForProjects(redis, { maxLen: 2 });
expect(result.scanned).toBe(1);
expect(result.trimmed).toBe(1);
const listItems = await redis.lRange(key, 0, -1);
expect(listItems.length).toBe(2);
const item0 = JSON.parse(listItems[0]);
const item1 = JSON.parse(listItems[1]);
// Expect sorted by time: Mid, then New
expect(item0.id).toBe('mid');
expect(item1.id).toBe('new');
});
it('POST /api/logs/clear deletes all logs for project', async () => {
const projectName = 'Demo';
const key = `${projectName}_项目控制台`;

View File

@@ -1,6 +1,6 @@
import express from 'express';
import cors from 'cors';
-import logRoutes, { trimProjectConsoleLogsByLength } from './routes/logs.js';
+import logRoutes, { pruneConsoleLogsForProjects } from './routes/logs.js';
import commandRoutes from './routes/commands.js';
import projectRoutes from './routes/projects.js';
import { getRedisClient } from './services/redisClient.js';
@@ -13,7 +13,14 @@ function parsePort(value, defaultPort) {
return Number.isFinite(parsed) ? parsed : defaultPort;
}
-const PORT = parsePort(process.env.PORT, 3001);
+function msUntilNextHour(nowMs = Date.now()) {
+const date = new Date(nowMs);
+date.setMinutes(0, 0, 0);
+date.setHours(date.getHours() + 1);
+return date.getTime() - nowMs;
+}
+const PORT = parsePort(process.env.PORT, 19070);
app.use(cors());
app.use(express.json());
@@ -43,19 +50,26 @@ const server = app.listen(PORT, async () => {
}
}, intervalMs);
-let trimRunning = false;
-const logsTrimIntervalMs = 60 * 60 * 1000;
-app.locals.logsTrimInterval = setInterval(async () => {
-if (trimRunning) return;
-trimRunning = true;
-try {
-await trimProjectConsoleLogsByLength(redis, { maxLen: 1000 });
-} catch (err) {
-void err;
-} finally {
-trimRunning = false;
-}
-}, logsTrimIntervalMs);
+const scheduleConsolePrune = () => {
+const delay = msUntilNextHour();
+app.locals.consolePruneTimeout = setTimeout(async () => {
+app.locals.consolePruneInterval = setInterval(async () => {
+try {
+await pruneConsoleLogsForProjects(redis);
+} catch (err) {
+void err;
+}
+}, 3_600_000);
+try {
+await pruneConsoleLogsForProjects(redis);
+} catch (err) {
+void err;
+}
+}, delay);
+};
+scheduleConsolePrune();
} catch (err) {
console.error('[redis] failed to connect on startup', err);
}
@@ -66,8 +80,11 @@ process.on('SIGINT', async () => {
if (app.locals.heartbeatPruneInterval) {
clearInterval(app.locals.heartbeatPruneInterval);
}
-if (app.locals.logsTrimInterval) {
-clearInterval(app.locals.logsTrimInterval);
+if (app.locals.consolePruneTimeout) {
+clearTimeout(app.locals.consolePruneTimeout);
+}
+if (app.locals.consolePruneInterval) {
+clearInterval(app.locals.consolePruneInterval);
}
if (app.locals.redis) {
await app.locals.redis.quit();

View File

@@ -22,7 +22,7 @@
<h3>服务不可用</h3> <h3>服务不可用</h3>
<p>无法连接到后端服务请检查服务是否正常运行</p> <p>无法连接到后端服务请检查服务是否正常运行</p>
</div> </div>
<router-view v-else name="main" /> <router-view name="main" />
</main> </main>
</div> </div>
</div> </div>
@@ -306,4 +306,4 @@ body {
width: 280px;
}
}
</style>

View File

@@ -6,7 +6,10 @@
<!-- 日志级别筛选 -->
<div class="log-level-filter">
<label class="filter-label">日志级别:</label>
-<select v-model="selectedLogLevel" class="filter-select">
+<select
+v-model="selectedLogLevel"
+class="filter-select"
+>
<option value="all">
全部
</option>
@@ -28,13 +31,20 @@
<!-- 自动滚动开关 -->
<div class="auto-scroll-toggle">
<label class="toggle-label">
-<input v-model="autoScroll" type="checkbox" class="toggle-checkbox">
+<input
+v-model="autoScroll"
+type="checkbox"
+class="toggle-checkbox"
+>
<span class="toggle-text">自动滚动</span>
</label>
</div>
<!-- 日志清理按钮 -->
-<button class="clear-logs-btn" @click="clearLogs">
+<button
+class="clear-logs-btn"
+@click="clearLogs"
+>
清空日志
</button>
@@ -46,8 +56,14 @@
</div>
<!-- 日志显示区域 -->
-<div ref="logsContainer" class="logs-container">
-<div v-if="timelineMarkers.length" class="timeline-bar">
+<div
+ref="logsContainer"
+class="logs-container"
+>
+<div
+v-if="timelineMarkers.length"
+class="timeline-bar"
+>
<div class="timeline-header">
<div class="timeline-title">
时间轴
@@ -77,10 +93,18 @@
</div>
<div class="timeline-track">
-<div v-for="marker in timelineMarkers" :key="marker.id" class="timeline-marker-group"
-:style="{ left: `${marker.position}%` }" @click="scrollToLog(marker.id)">
-<div class="timeline-marker" :class="`marker-${marker.level}`"
-:title="`${formatTimestamp(marker.timestamp)}: ${marker.message}`" />
+<div
+v-for="marker in timelineMarkers"
+:key="marker.id"
+class="timeline-marker-group"
+:style="{ left: `${marker.position}%` }"
+@click="scrollToLog(marker.id)"
+>
+<div
+class="timeline-marker"
+:class="`marker-${marker.level}`"
+:title="`${formatTimestamp(marker.timestamp)}: ${marker.message}`"
+/>
<div class="timeline-tooltip">
<div class="tooltip-time">
@@ -94,17 +118,28 @@
</div>
</div>
-<div ref="logTableWrapper" class="log-table-wrapper" @scroll="handleScroll">
+<div
+ref="logTableWrapper"
+class="log-table-wrapper"
+@scroll="handleScroll"
+>
<table class="log-table">
<tbody>
-<tr v-for="log in filteredLogs" :key="log.id" :data-log-id="log.id"
-:class="['log-item', `log-level-${log.level}`, { 'log-active': activeLogId === log.id }]">
+<tr
+v-for="log in filteredLogs"
+:key="log.id"
+:data-log-id="log.id"
+:class="['log-item', `log-level-${log.level}`, { 'log-active': activeLogId === log.id }]"
+>
<td class="log-meta">
<div class="log-timestamp">
{{ formatTimestamp(log.timestamp) }}
</div>
</td>
-<td class="log-message">
+<td
+:class="['log-message', { 'log-message-expanded': expandedLogId === log.id }]"
+@click="toggleExpand(log.id)"
+>
{{ log.message }}
</td>
</tr>
@@ -113,7 +148,10 @@
</div>
<!-- 空状态 -->
-<div v-if="filteredLogs.length === 0" class="empty-logs">
+<div
+v-if="filteredLogs.length === 0"
+class="empty-logs"
+>
<p>暂无日志记录</p>
</div>
</div>
@@ -123,9 +161,19 @@
<div class="command-prompt"> <div class="command-prompt">
$ $
</div> </div>
<input ref="commandInputRef" v-model="commandInput" type="text" class="command-input" <input
placeholder="输入:<接口名> <参数...>(按空格分割)" autocomplete="off" @keydown.enter="sendCommand"> ref="commandInputRef"
<button class="send-command-btn" @click="sendCommand"> v-model="commandInput"
type="text"
class="command-input"
placeholder="输入:<接口名> <参数...>(按空格分割)"
autocomplete="off"
@keydown.enter="sendCommand"
>
<button
class="send-command-btn"
@click="sendCommand"
>
发送 发送
</button> </button>
</div> </div>
@@ -174,6 +222,15 @@ const filteredLogs = computed(() => {
});
const activeLogId = ref('');
const expandedLogId = ref('');
function toggleExpand(logId) {
if (expandedLogId.value === logId) {
expandedLogId.value = '';
} else {
expandedLogId.value = logId;
}
}
const timelineData = computed(() => {
if (!filteredLogs.value.length) {
@@ -225,6 +282,7 @@ const timelineMarkers = computed(() => {
function scrollTableToBottom() {
if (!logTableWrapper.value) return;
setTimeout(() => {
if (!logTableWrapper.value) return;
logTableWrapper.value.scrollTop = logTableWrapper.value.scrollHeight;
}, 60);
}
@@ -321,6 +379,7 @@ const addLog = (logData) => {
// 自动滚动到底部(如果启用了自动滚动且用户在底部)
if (autoScroll.value && isAtBottom.value && logTableWrapper.value) {
setTimeout(() => {
if (!logTableWrapper.value) return;
logTableWrapper.value.scrollTop = logTableWrapper.value.scrollHeight;
}, 50);
}
@@ -377,6 +436,7 @@ const handleScroll = () => {
watch(filteredLogs, () => {
if (autoScroll.value && isAtBottom.value && logTableWrapper.value) {
setTimeout(() => {
if (!logTableWrapper.value) return;
logTableWrapper.value.scrollTop = logTableWrapper.value.scrollHeight;
}, 50);
}
@@ -809,9 +869,18 @@ watch(() => props.projectName, async () => {
.log-message {
vertical-align: top;
padding: 0 0 0.25rem 0;
word-break: break-word;
white-space: pre-wrap;
line-height: 1.5;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
cursor: pointer;
}
.log-message-expanded {
white-space: pre-wrap;
word-break: break-word;
overflow: visible;
cursor: text;
}
.log-level-error .log-message {

View File

@@ -13,7 +13,7 @@ export default defineConfig({
port: 3000,
proxy: {
'/api': {
-target: 'http://localhost:3001',
+target: 'http://127.0.0.1:19070',
changeOrigin: true,
},
},