
[Bug] do not use torch.cuda.current_device() as device, since it only returns an int #130

Open
sunpengsdu opened this issue Mar 26, 2024 · 3 comments
Assignees
Labels
bug Something isn't working

Comments

@sunpengsdu
Contributor

Describe the bug

We have many cases like the following:

data = torch.empty(partition_size, dtype=tensor.dtype, device=torch.cuda.current_device(), requires_grad=False)

where we pass device=torch.cuda.current_device() directly. This is not recommended, since torch.cuda.current_device() only returns the device id as an int. Such code happens to work on GPUs, but it may cause problems when running on NPU.
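A minimal sketch of the kind of fix being suggested (the helper name current_torch_device is hypothetical, not part of InternEvo): wrap the integer index in an explicit torch.device so the device type is preserved.

import torch

def current_torch_device() -> torch.device:
    # Hypothetical helper: torch.cuda.current_device() only returns an int index,
    # so build an explicit torch.device from it; fall back to CPU when CUDA is absent.
    if torch.cuda.is_available():
        return torch.device(f"cuda:{torch.cuda.current_device()}")
    return torch.device("cpu")

# Usage: pass the explicit device object instead of the bare int.
tensor = torch.randn(4)
partition_size = 4
data = torch.empty(
    partition_size,
    dtype=tensor.dtype,
    device=current_torch_device(),
    requires_grad=False,
)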

Environment

Python 3.8 + PyTorch 2.1

Other information

No response

@sunpengsdu sunpengsdu added the bug Something isn't working label Mar 26, 2024
@sunpengsdu
Contributor Author

(WeCom screenshot attached)

@sunpengsdu
Contributor Author

(WeCom screenshot attached)

@sallyjunjun
Collaborator

These usages have already been removed; see:

https://github.com/InternLM/InternEvo/pull/139/files
