
I'm using the electrs backend API to build a BTC blockchain indexing engine and local HTTP API on Linux: https://github.com/Blockstream/electrs.

An error occurs during indexing. I have repeated the whole process more than once and it fails at the same point every time; as far as I can tell, always at the end of the reading phase (a few minutes in, to be precise):

DEBUG - writing 1167005 rows to RocksDB { path: "./db/mainnet/newindex/txstore" }, flush=Disable
TRACE - parsing 50331648 bytes
TRACE - fetched 101 blocks
DEBUG - writing 1144149 rows to RocksDB { path: "./db/mainnet/newindex/txstore" }, flush=Disable
TRACE - fetched 104 blocks
DEBUG - writing 1221278 rows to RocksDB { path: "./db/mainnet/newindex/txstore" }, flush=Disable
TRACE - skipping block 00000000000000000006160011df713a63b3bedc361b60bad660d5a76434ad59
TRACE - skipping block 00000000000000000005d70314d0dd3a31b0d44a5d83bc6c66a4aedbf8cf6207
TRACE - skipping block 00000000000000000001363a85233b4e4a024c8c8791d9eb0e7942a75be0d4de
TRACE - skipping block 00000000000000000008512cf84870ff39ce347e7c83083615a2731e34a3a956
TRACE - skipping block 0000000000000000000364350efd609c8b140d7b9818f15e19a17df9fc736971
TRACE - skipping block 0000000000000000000cc0a4fd1e418341f5926f0a6a5c5e70e4e190ed4b2251
TRACE - fetched 23 blocks
DEBUG - writing 1159426 rows to RocksDB { path: "./db/mainnet/newindex/txstore" }, flush=Disable
DEBUG - writing 1155416 rows to RocksDB { path: "./db/mainnet/newindex/txstore" }, flush=Disable
DEBUG - writing 232110 rows to RocksDB { path: "./db/mainnet/newindex/txstore" }, flush=Disable
DEBUG - starting full compaction on RocksDB { path: "./db/mainnet/newindex/txstore" }
DEBUG - finished full compaction on RocksDB { path: "./db/mainnet/newindex/txstore" }
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { message: "IO error: While open a file for random read: ./db/mainnet/newindex/txstore/000762.sst: Too many open files" }', src/new_index/db.rs:192:44
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Aborted (core dumped)
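
For anyone hitting the same panic: RocksDB is failing to open an SST file because the process has run out of file descriptors. A quick way to watch how many descriptors the indexer actually holds, and which limits it is really running under, is to inspect /proc (a minimal diagnostic sketch, assuming the binary shows up as "electrs" in the process list):

    # count file descriptors currently held by the running indexer
    ls /proc/$(pgrep -x electrs)/fd | wc -l

    # show the soft and hard open-files limits of the running process
    grep 'open files' /proc/$(pgrep -x electrs)/limits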

The db directory (where the index is stored) is over 450 GB. My open-files limit is 1048576 (checked with `ulimit -aH`), so the limit is probably not the problem. I went through https://github.com/Blockstream/esplora/issues/133 but it didn't help. Any idea what is going wrong?
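
In hindsight the important detail is that `ulimit -aH` reports hard limits only; a newly started process inherits the soft limit, which can be far lower. Both can be checked directly (see the EDIT below for how this turned out here):

    ulimit -Hn   # hard limit for open files (1048576 in my case)
    ulimit -Sn   # soft limit, what a new process actually gets (1024 in my case)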

EDIT: The soft limit (checked with `ulimit -n`) was only 1024, and that was the root of the problem. Raising it to 65000 fixed the crash. I set it with `ulimit -n 65000`, which only lasts for the current terminal session. I also edited /etc/security/limits.conf, but the change did not take effect globally.
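
To make the higher limit survive beyond one shell session: /etc/security/limits.conf is read by pam_limits at login, so an edit there only takes effect after logging out and back in, and it does not apply to systemd services at all (those need `LimitNOFILE=` in their unit file). A sketch of the relevant entries, with <user> as a placeholder for the account that runs electrs:

    # /etc/security/limits.conf -- picked up by pam_limits on the next login
    <user>  soft  nofile  65000
    <user>  hard  nofile  1048576

    # for a systemd service, set it in the unit file instead:
    # [Service]
    # LimitNOFILE=65000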
