
Graphic Memory problems #3

Open

ZorrowHu opened this issue Nov 10, 2020 · 2 comments

@ZorrowHu

When running with the DIGINETICA dataset, I found this program to be really memory demanding, let alone YOOCHOOSE. In my case, one TITAN Xp is just not enough. I then tried to spread the model across multiple GPUs, but failed to do so.
I tried to use DataParallel from torch:

import os
import torch

os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'  # must be set before CUDA is initialised
model = torch.nn.DataParallel(SessionGraph())
model = model.cuda()

Since model is now a DataParallel object, the attribute accesses in train_test are modified accordingly:

model.loss_function → model.module.loss_function
...
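Concretely, the train loop then looks something like this (a sketch only; the loss_function / optimizer attributes and the targets - 1 indexing follow the repo's SessionGraph and train_test code, and inputs is a stand-in for the actual forward arguments):

scores = model(*inputs)  # the forward pass still goes through the DataParallel wrapper
loss = model.module.loss_function(scores, targets - 1)  # custom attributes need .module
model.module.optimizer.zero_grad()
loss.backward()
model.module.optimizer.step()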

And that didn't work: I still got a runtime error about insufficient memory on GPU 0 while GPU 1 was barely utilized. So I would like to know how you did it. Great thanks in advance!

@johnny12150

When running the DIGINETICA dataset, I set batch_size to 50 in order to run on a 2080 Ti.
Now, using a 3090, I can run batch_size = 100. This model is really VRAM demanding.
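For example, assuming the repo's main.py exposes the usual --dataset and --batchSize flags:

python main.py --dataset diginetica --batchSize 50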

@johnny12150 commented Jan 4, 2021

@ZorrowHu You need to assign device_ids in DataParallel.
The optimizer part also needs to be updated in order to utilize the second (or further) GPUs.
[screenshot of the modified code]
I have tested this on SR-GNN, not TA-GNN, but you can give it a try.
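For reference, the two changes might look roughly like this (a sketch against SR-GNN-style code, where SessionGraph builds its own Adam optimizer in __init__; opt.lr and opt.l2 follow the repo's argument names):

import torch

# List every GPU explicitly instead of relying on the default:
model = torch.nn.DataParallel(SessionGraph(opt, n_node), device_ids=[0, 1])
model = model.cuda()  # the master copy of the weights sits on device_ids[0]

# The wrapper hides SessionGraph's own attributes, so either reach the
# existing optimizer through .module ...
model.module.optimizer.zero_grad()

# ... or rebuild it outside the module over the wrapped model's parameters:
optimizer = torch.optim.Adam(model.parameters(), lr=opt.lr, weight_decay=opt.l2)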

Edit: For more info, you can view my repo.
