Allow server to run on specific GPU id #87

Open
che85 opened this issue Oct 24, 2024 · 2 comments

Comments

Contributor

che85 commented Oct 24, 2024

It would be helpful to have an option to specify which GPU to use when running inference on a machine with multiple GPUs. In my case, I am running multiple MONAILabel servers, each with its own dedicated GPU.

Contributor Author

che85 commented Oct 24, 2024

```python
device = torch.device("cpu") if torch.cuda.device_count() == 0 else torch.device(0)
```

The script uses the first available GPU. We could use the CUDA_VISIBLE_DEVICES environment variable, but I am not sure how this behaves when the server launches a subprocess with sys.executable.
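For what it's worth, CUDA_VISIBLE_DEVICES does propagate to a subprocess as long as it is set in the environment the child inherits. A minimal sketch (not MONAILabel's actual code; `launch_pinned` is an illustrative name) of pinning a sys.executable child to one GPU:

```python
import os
import subprocess
import sys

# Hedged sketch: pin a child process launched via sys.executable to one GPU
# by setting CUDA_VISIBLE_DEVICES in the environment the child inherits.
# Inside the child, the selected GPU then appears as device index 0 to CUDA.
def launch_pinned(gpu_id: int) -> str:
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    # For illustration the child just reports back what it sees.
    result = subprocess.run(
        [sys.executable, "-c",
         "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"],
        env=env, capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

Since the child only sees the pinned device, an unmodified `torch.device(0)` inside it would already land on the intended GPU.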

Owner

lassoan commented Oct 24, 2024

Since we already pass several configuration parameters via command-line arguments, we could simply add one more optional device argument:

```python
def main(model_file,
         image_file,
         result_file,
         save_mode=None,
         image_file_2=None,
         image_file_3=None,
         image_file_4=None,
         device=None,
         **kwargs):
    ...
    if device is None:
        device = torch.device("cpu") if torch.cuda.device_count() == 0 else torch.device(0)
    else:
        device = torch.device(device)
    ...
```
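The fallback logic itself is easy to test in isolation. A hedged sketch of the proposal above, with the torch calls replaced by plain device strings so it runs without a GPU (`resolve_device` and `cuda_device_count` are illustrative names, not part of the actual script):

```python
# Sketch of the proposed fallback: an explicit device string wins; otherwise
# pick GPU 0 when any GPU is visible, else fall back to the CPU.
def resolve_device(device=None, cuda_device_count=0):
    if device is None:
        return "cpu" if cuda_device_count == 0 else "cuda:0"
    return device
```

With this shape, a command-line value such as "cuda:1" would pass straight through to torch.device("cuda:1"), while servers started without the argument keep the current first-GPU behavior.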
