I've trained a model on a GPU, exported it as a zip, and transferred it to a PC with only a CPU. I installed CPU-only PyTorch, but I get this error when running nnUNetv2_predict. Is this because the model was trained on a GPU? It still seems to be looking for CUDA... is there any way to fix this? Thanks.
Traceback (most recent call last):
File "/home/jk/projects/nnunet_scripts/_venv/bin/nnUNetv2_predict", line 8, in <module>
sys.exit(predict_entry_point())
File "/home/jk/projects/nnunet_scripts/_nnUNet/nnunetv2/inference/predict_from_raw_data.py", line 866, in predict_entry_point
predictor.predict_from_files(args.i, args.o, save_probabilities=args.save_probabilities,
File "/home/jk/projects/nnunet_scripts/_nnUNet/nnunetv2/inference/predict_from_raw_data.py", line 258, in predict_from_files
return self.predict_from_data_iterator(data_iterator, save_probabilities, num_processes_segmentation_export)
File "/home/jk/projects/nnunet_scripts/_nnUNet/nnunetv2/inference/predict_from_raw_data.py", line 351, in predict_from_data_iterator
for preprocessed in data_iterator:
File "/home/jk/projects/nnunet_scripts/_nnUNet/nnunetv2/inference/data_iterators.py", line 117, in preprocessing_iterator_fromfiles
[i.pin_memory() for i in item.values() if isinstance(i, torch.Tensor)]
File "/home/jk/projects/nnunet_scripts/_nnUNet/nnunetv2/inference/data_iterators.py", line 117, in <listcomp>
[i.pin_memory() for i in item.values() if isinstance(i, torch.Tensor)]
NotImplementedError: Could not run 'aten::_pin_memory' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_pin_memory' is only available for these backends: [Meta, NestedTensorCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastXPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
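For context, the traceback shows the preprocessing iterator calling `pin_memory()` on the input tensors, which nnU-Net only does when the predictor's target device is CUDA; on a CPU-only PyTorch build that operator is unavailable, hence the error. Recent nnU-Net v2 versions expose a `-device` option on `nnUNetv2_predict` for exactly this case. A possible invocation (the folder paths and dataset/configuration names below are placeholders; run `nnUNetv2_predict -h` to confirm the flag exists in your installed version):

```shell
# Run inference entirely on CPU; -device cpu tells the predictor not to
# use CUDA (and therefore not to pin memory for host-to-device transfers).
# INPUT_FOLDER, OUTPUT_FOLDER, DATASET_ID and 3d_fullres are placeholders.
nnUNetv2_predict -i INPUT_FOLDER -o OUTPUT_FOLDER \
    -d DATASET_ID -c 3d_fullres -device cpu
```

If you are using the Python API instead, constructing the predictor with `device=torch.device('cpu')` should have the same effect.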