If we use a combination of the Dataset and DataLoader classes (as shown below), I have to explicitly load the data onto the GPU using .to() or .cuda(). Is there a way to instruct the DataLoader to do it automatically/implicitly?
Code to understand/reproduce the scenario:
from torch.utils.data import Dataset, DataLoader
import numpy as np

class DemoData(Dataset):
    def __init__(self, limit):
        super(DemoData, self).__init__()
        self.data = np.arange(limit)

    def __len__(self):
        return self.data.shape[0]

    def __getitem__(self, idx):
        return (self.data[idx], self.data[idx]*100)

demo = DemoData(100)
loader = DataLoader(demo, batch_size=50, shuffle=True)

for i, (i1, i2) in enumerate(loader):
    print('Batch Index: {}'.format(i))
    print('Shape of data item 1: {}; shape of data item 2: {}'.format(i1.shape, i2.shape))
    # i1, i2 = i1.to('cuda:0'), i2.to('cuda:0')
    print('Device of data item 1: {}; device of data item 2: {}\n'.format(i1.device, i2.device))
This will output the following; note that without an explicit device-transfer instruction, the data is loaded onto the CPU:
Batch Index: 0
Shape of data item 1: torch.Size([50]); shape of data item 2: torch.Size([50])
Device of data item 1: cpu; device of data item 2: cpu
Batch Index: 1
Shape of data item 1: torch.Size([50]); shape of data item 2: torch.Size([50])
Device of data item 1: cpu; device of data item 2: cpu
A possible solution is in this PyTorch GitHub issue (still open at the time this question was posted), but I could not get it to work when the DataLoader has to return multiple data items!
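For context, the pattern usually suggested for this is a thin wrapper around the DataLoader that moves each batch to the target device as it is yielded. The sketch below is illustrative, not the code from the linked issue: the class name `DeviceDataLoader` is made up, and the tuple branch is the part that handles a `__getitem__` returning multiple items (the case described above). It falls back to CPU when no GPU is available.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class DemoData(Dataset):
    def __init__(self, limit):
        super().__init__()
        self.data = torch.arange(limit)

    def __len__(self):
        return self.data.shape[0]

    def __getitem__(self, idx):
        # Returns TWO items per sample, as in the question.
        return (self.data[idx], self.data[idx] * 100)

class DeviceDataLoader:
    """Hypothetical wrapper: iterate the inner loader, moving batches to `device`."""
    def __init__(self, loader, device):
        self.loader = loader
        self.device = device

    def __iter__(self):
        for batch in self.loader:
            # A batch may be a single tensor or a tuple/list of tensors;
            # move every tensor it contains to the target device.
            if isinstance(batch, (tuple, list)):
                yield tuple(t.to(self.device) for t in batch)
            else:
                yield batch.to(self.device)

    def __len__(self):
        return len(self.loader)

# Use the GPU if present, otherwise stay on CPU so the sketch still runs.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
loader = DeviceDataLoader(DataLoader(DemoData(100), batch_size=50, shuffle=True), device)

for i1, i2 in loader:
    print(i1.device, i2.device)  # both tensors report the chosen device
```

The wrapper changes nothing about batching or shuffling; it only intercepts iteration, so the calling loop looks identical to the one above minus the manual `.to()` calls.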