feat: implement Kafka batch consumption and batch writes to improve throughput

Introduce a batch-processing mechanism that buffers messages and writes them to the database in batches, significantly improving consumption performance. Adjust the Kafka configuration parameters to optimize consumer concurrency and the commit strategy. Add automatic creation of partition indexes and refactor the processor to support batch operations. Add fallback write logic to handle bad records, and extend metrics collection to monitor the effect of batch processing.
2026-02-09 10:50:56 +08:00
parent a8c7cf74e6
commit 8337c60f98
17 changed files with 1165 additions and 330 deletions
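The handler code itself is not part of this excerpt, so the following is only a minimal sketch of the buffer-and-flush loop with the fallback write path described in the commit message, assuming the segmentio/kafka-go client; `insertBatch` and `insertRow` are hypothetical stand-ins for the project's database layer.

```go
package consumer

import (
	"context"
	"log"
	"time"

	"github.com/segmentio/kafka-go"
)

const (
	batchSize    = 1000                  // mirrors KAFKA_BATCH_SIZE
	batchTimeout = 20 * time.Millisecond // mirrors KAFKA_BATCH_TIMEOUT_MS
)

// insertBatch and insertRow are hypothetical stand-ins for the project's
// database layer (a multi-row INSERT vs. a single-row INSERT).
func insertBatch(ctx context.Context, msgs []kafka.Message) error { return nil }
func insertRow(ctx context.Context, msg kafka.Message) error      { return nil }

// consumeBatches buffers messages and flushes them when the batch is full
// or the batch window has elapsed, whichever comes first.
func consumeBatches(ctx context.Context, r *kafka.Reader) error {
	for {
		// Block until the first message of the next batch arrives.
		first, err := r.FetchMessage(ctx)
		if err != nil {
			return err // context cancelled or fatal reader error
		}
		buf := append(make([]kafka.Message, 0, batchSize), first)

		// Keep filling until the batch is full or the window closes.
		deadline := time.Now().Add(batchTimeout)
		for len(buf) < batchSize {
			fetchCtx, cancel := context.WithDeadline(ctx, deadline)
			msg, err := r.FetchMessage(fetchCtx)
			cancel()
			if err != nil {
				break // window elapsed (or shutdown): flush what we have
			}
			buf = append(buf, msg)
		}

		flushBatch(ctx, r, buf)
	}
}

// flushBatch tries one multi-row write for the whole batch and degrades to
// row-by-row writes on failure, so a single bad record cannot poison the
// batch. Offsets are committed only after the data has been handled.
func flushBatch(ctx context.Context, r *kafka.Reader, buf []kafka.Message) {
	if err := insertBatch(ctx, buf); err != nil {
		for _, m := range buf {
			if rowErr := insertRow(ctx, m); rowErr != nil {
				log.Printf("dropping record at offset %d: %v", m.Offset, rowErr)
			}
		}
	}
	if err := r.CommitMessages(ctx, buf...); err != nil {
		log.Printf("offset commit failed: %v", err)
	}
}
```

Flushing on whichever comes first, batch size or the 20 ms window, bounds latency while still amortizing one database round trip over up to a thousand rows.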


@@ -9,8 +9,12 @@ KAFKA_SASL_MECHANISM=plain
 KAFKA_SASL_USERNAME=blwmomo
 KAFKA_SASL_PASSWORD=blwmomo
 KAFKA_SSL_ENABLED=false
-KAFKA_CONSUMER_INSTANCES=6
-KAFKA_MAX_IN_FLIGHT=50
+KAFKA_CONSUMER_INSTANCES=3
+KAFKA_MAX_IN_FLIGHT=5000
+KAFKA_BATCH_SIZE=1000
+KAFKA_BATCH_TIMEOUT_MS=20
+KAFKA_COMMIT_INTERVAL_MS=200
+KAFKA_COMMIT_ON_ATTEMPT=true
 KAFKA_FETCH_MAX_BYTES=10485760
 KAFKA_FETCH_MAX_WAIT_MS=100
 KAFKA_FETCH_MIN_BYTES=1
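The diff does not show which Kafka client consumes these settings; as an illustration only, here is how the fetch and commit knobs could map onto a segmentio/kafka-go ReaderConfig. Broker, topic, and group names are placeholders, and `envInt`/`newReader` are hypothetical helpers.

```go
package consumer

import (
	"os"
	"strconv"
	"strings"
	"time"

	"github.com/segmentio/kafka-go"
)

// envInt reads an integer environment variable, falling back to a default.
func envInt(key string, def int) int {
	if v, err := strconv.Atoi(os.Getenv(key)); err == nil {
		return v
	}
	return def
}

// newReader wires the fetch and commit settings from the environment file
// above into a consumer.
func newReader(brokers, topic, groupID string) *kafka.Reader {
	return kafka.NewReader(kafka.ReaderConfig{
		Brokers:        strings.Split(brokers, ","),
		Topic:          topic,
		GroupID:        groupID,
		MinBytes:       envInt("KAFKA_FETCH_MIN_BYTES", 1),
		MaxBytes:       envInt("KAFKA_FETCH_MAX_BYTES", 10<<20), // 10485760
		MaxWait:        time.Duration(envInt("KAFKA_FETCH_MAX_WAIT_MS", 100)) * time.Millisecond,
		CommitInterval: time.Duration(envInt("KAFKA_COMMIT_INTERVAL_MS", 200)) * time.Millisecond,
	})
}
```

KAFKA_CONSUMER_INSTANCES=3 presumably controls how many such readers run in parallel, and KAFKA_MAX_IN_FLIGHT=5000 the depth of the in-memory queue between the readers and the batch writer; both are application-level settings rather than kafka-go reader options.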