I'm working with PyTorch on an M2 Max, and I'm trying to reduce compute time by using the GPU. I have a working variant that runs on the GPU:
mnist_test_loader = DataLoader(mnist_test_dataset, batch_size=32, shuffle=False)
network.to(device="mps")
for X in mnist_test_loader:
    X = X.to(device="mps")
    prediction = network(X)
But the problem is that when iterating over the DataLoader, each batch X is first loaded on the CPU, and moving it to the GPU with X.to(device="mps") takes time.
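Here is a self-contained version of what I mean (a sketch: a random TensorDataset stands in for mnist_test_dataset, and the device falls back to CPU when MPS isn't available, so it runs anywhere). Passing non_blocking=True is supposed to let the copy overlap with compute on backends that support it, but there is still a copy on every iteration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for mnist_test_dataset: random MNIST-shaped images.
dataset = TensorDataset(torch.randn(64, 1, 28, 28))

# Fall back to CPU when MPS is not available, so the sketch runs anywhere.
device = "mps" if torch.backends.mps.is_available() else "cpu"

loader = DataLoader(dataset, batch_size=32, shuffle=False)

for (X,) in loader:
    # Each batch starts out on the CPU; this per-batch copy is the cost
    # described above.
    X = X.to(device, non_blocking=True)
```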
I've been reading the documentation for a while and found that it's supposedly possible to have the DataLoader put data on the GPU when it is created, by adding two parameters: pin_memory and pin_memory_device:

mnist_test_loader = DataLoader(mnist_test_dataset, batch_size=32, shuffle=False,
                               pin_memory=True, pin_memory_device="mps")
The idea is that the DataLoader then yields batches that are already on the GPU, so I can pass them straight into the network (which is also on the GPU) without moving them from CPU to GPU on every iteration.
But then the problem appears:
for X in mnist_test_loader:
...
I can't iterate over the DataLoader this way. When I try to, I get the following error:
RuntimeError: Attempted to set the storage of a tensor on device "cpu" to a storage on different device "mps:0".
This is no longer allowed; the devices must match.
Any ideas how to deal with this? Maybe there's another way to get data out of the DataLoader? Thanks for your time.
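The closest I've gotten is hiding the per-batch copy behind a thin wrapper, so the loop at least looks like it yields GPU batches (a sketch, not an actual fix for pinned memory on MPS; the TensorDataset is again a hypothetical stand-in for MNIST, with a CPU fallback):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

class DeviceDataLoader:
    """Thin wrapper that moves each batch to `device` as it is yielded."""

    def __init__(self, loader, device):
        self.loader = loader
        self.device = device

    def __iter__(self):
        for batch in self.loader:
            # The CPU->device copy still happens here, once per batch.
            yield tuple(t.to(self.device) for t in batch)

    def __len__(self):
        return len(self.loader)

device = "mps" if torch.backends.mps.is_available() else "cpu"
dataset = TensorDataset(torch.randn(64, 1, 28, 28))
loader = DeviceDataLoader(DataLoader(dataset, batch_size=32), device)

for (X,) in loader:
    pass  # X is already on `device` by the time the loop body runs
```

This only tidies up the loop; it doesn't avoid the transfer itself, which is why I'm asking.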
from Iterate over Dataloader which is loaded on GPU (MPS)