When running with the DIGINETICA dataset, I found this program really memory-demanding, let alone YOOCHOOSE. In my case, one TITAN XP is just not enough, so I tried to spread the model across multiple CUDA devices, but failed.
I tried to use DataParallel from torch:
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'
model = torch.nn.DataParallel(SessionGraph())
model = model.cuda()
Since model is now a DataParallel object, the parameter access in train_test is modified accordingly.
That didn't work: I still got a runtime error about insufficient memory on GPU0, while GPU1 was barely utilized. So I want to know how you did it. Many thanks in advance!
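For reference, a minimal sketch of the wrapping attempted above, with the class name spelled correctly (`DataParallel`, capital P). `ToyModel` is a stand-in for `SessionGraph`, which is not shown here; the GPU branch is guarded so the snippet also runs on a single-GPU or CPU-only machine:

```python
import torch
import torch.nn as nn


class ToyModel(nn.Module):
    """Stand-in for SessionGraph (hypothetical; the real model takes its own args)."""

    def __init__(self, hidden=16):
        super().__init__()
        self.fc = nn.Linear(hidden, hidden)

    def forward(self, x):
        return self.fc(x)


model = ToyModel()
if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    # The class is torch.nn.DataParallel, not torch.nn.Dataparallel.
    model = nn.DataParallel(model, device_ids=[0, 1]).cuda()

# Custom attributes of the wrapped network live on model.module once
# DataParallel is applied, so downstream code must unwrap it:
inner = model.module if isinstance(model, nn.DataParallel) else model

out = model(torch.randn(4, 16))
```

Note that `DataParallel` only splits the *batch* across GPUs; the full model is replicated on every device, so it helps with activation memory but not with a model that is itself too large for one card.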
When running the DIGINETICA dataset, I set batch_size to 50 in order to fit on a 2080 Ti.
On a 3090 I can run batch_size = 100; this model is really VRAM-demanding.
@ZorrowHu You need to assign the device_ids in DataParallel.
The optimizer part also needs to be updated in order to utilize the second (and any further) GPUs.
I tested this on SR-GNN, not TA-GNN; you can give it a try.
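A sketch of what the two adjustments above might look like, assuming (as in SR-GNN) that the model builds its own optimizer in `__init__`; `TinyNet` is a hypothetical stand-in and the exact attribute names may differ in the actual repository:

```python
import torch
import torch.nn as nn


class TinyNet(nn.Module):
    """Hypothetical stand-in for SessionGraph, which creates its own optimizer."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 8)
        # SR-GNN attaches the optimizer to the model, roughly like this:
        self.optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)

    def forward(self, x):
        return self.fc(x)


model = TinyNet()
if torch.cuda.device_count() > 1:
    # Explicit device_ids, as suggested above.
    model = nn.DataParallel(model, device_ids=[0, 1]).cuda()

# After wrapping, the optimizer is only reachable via .module, so the
# training loop has to unwrap the model before calling optimizer methods:
opt = model.module.optimizer if isinstance(model, nn.DataParallel) else model.optimizer

opt.zero_grad()
loss = model(torch.randn(2, 8)).sum()
loss.backward()
opt.step()
```

The same `.module` indirection applies to any other custom attribute or method (e.g. a per-model loss function) referenced in train_test.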