I have a list data that I want to write out, one file per item, like this:
for i, chunk in enumerate(data):
    fname = ROOT / f'{i}.in'
    with open(fname, "wb") as fout:
        dill.dump(chunk, fout)
Since the data list can be quite long and I am writing to a network storage location, I spend a lot of time waiting on NFS during this loop, and I would like to do the writes asynchronously if possible.
What I have now looks basically like this:
import dill
import asyncio
import aiofiles
from datetime import datetime
from pathlib import Path

ROOT = Path("/tmp/")
data = [str(i) for i in range(500)]

def serialize(data):
    """
    Write my data out in serial
    """
    for i, chunk in enumerate(data):
        fname = ROOT / f'{i}.in'
        print(fname)
        with open(fname, "wb") as fout:
            dill.dump(chunk, fout)

def aserialize(data):
    """
    Same as above, but writes my data out asynchronously
    """
    fnames = [ROOT / f'{i}.in' for i in range(len(data))]
    chunks = data

    async def write_file(i):
        fname = fnames[i]
        chunk = chunks[i]
        print(fname)
        async with aiofiles.open(fname, "wb") as fout:
            print(f"written: {i}")
            dill.dump(chunk, fout)
            await fout.flush()

    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.gather(*[write_file(i) for i in range(len(data))]))
Now, when I time the writes, this looks fast enough to be worth using on my NFS:
# test 1
start = datetime.utcnow()
serialize(data)
end = datetime.utcnow()
print(end - start)
# >>> 0:02:04.204681
# test 3
start = datetime.utcnow()
aserialize(data)
end = datetime.utcnow()
print(end - start)
# >>> 0:00:27.048893
# faster is better.
But when I actually de-serialize the data I wrote, I see that it was probably only fast because it wasn't writing anything at all:
def deserialize(dat):
    tmp = []
    for i in range(len(dat)):
        fname = ROOT / f'{i}.in'
        with open(fname, "rb") as fin:
            fo = dill.load(fin)
        tmp.append(fo)
    return tmp
serialize(data)
d2 = deserialize(data)
d2 == data
# True
That round-trips fine, whereas:
aserialize(data)
d3 = deserialize(data)
>>> Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 6, in deserialize
  File "...python3.7/site-packages/dill/_dill.py", line 305, in load
    obj = pik.load()
EOFError: Ran out of input
That is, the asynchronously written files are empty. No wonder it was so fast.
How can I dill/pickle my list out to files asynchronously and have them actually be written? I suppose I need to await dill.dump somehow? I thought fout.flush would take care of that, but apparently it doesn't.
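For reference, this is the direction I imagine, but it is only a rough sketch and I am not sure it is the right approach. It assumes serializing each chunk to bytes with dill.dumps first so that the file write itself can be awaited; write_file and awrite_all are just placeholder names here:

import dill
import asyncio
import aiofiles
from pathlib import Path

ROOT = Path("/tmp/")

async def write_file(i, chunk):
    # Serialize in memory first, then hand the bytes to aiofiles so the
    # actual write is awaited rather than skipped.
    fname = ROOT / f'{i}.in'
    payload = dill.dumps(chunk)  # dill.dumps returns the pickled bytes
    async with aiofiles.open(fname, "wb") as fout:
        await fout.write(payload)

async def awrite_all(data):
    await asyncio.gather(*[write_file(i, chunk) for i, chunk in enumerate(data)])

# asyncio.run(awrite_all(data))

The gather call mirrors what aserialize does above; the only real change is awaiting fout.write on pre-serialized bytes instead of calling dill.dump directly on the async file handle.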