Hello,
I am training on a custom dataset with 80 classes. I have updated the config file (CHAR_NUM_CLASSES, NUM_CHAR) and set RESUME and CHAR_MASK_ON to False. I am fine-tuning from the pre_train model. I have also updated the char_to_num and num_to_char functions, added a dataset class for the new dataset in the maskrcnn_benchmark folder, and updated paths_catalog.py. I am getting the error:
RuntimeError: Error(s) in loading state_dict for DistributedDataParallel:
size mismatch for module.roi_heads.mask.predictor.seq.seq_decoder.embedding.weight: copying a param with shape torch.Size([38, 38]) from checkpoint, the shape in current model is torch.Size([83, 83]).
size mismatch for module.roi_heads.mask.predictor.seq.seq_decoder.word_linear.weight: copying a param with shape torch.Size([256, 38]) from checkpoint, the shape in current model is torch.Size([256, 83]).
size mismatch for module.roi_heads.mask.predictor.seq.seq_decoder.out.weight: copying a param with shape torch.Size([38, 256]) from checkpoint, the shape in current model is torch.Size([83, 256]).
size mismatch for module.roi_heads.mask.predictor.seq.seq_decoder.out.bias: copying a param with shape torch.Size([38]) from checkpoint, the shape in current model is torch.Size([83]).
Please tell me how I can fix this error.
Thank you
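For anyone hitting the same mismatch: the checkpoint's sequence decoder was trained with a 38-symbol character vocabulary, while the new config builds an 83-symbol decoder, so those tensors cannot be copied. A common workaround (a sketch, not this repo's official fix) is to drop the size-mismatched entries from the checkpoint's state dict and load the rest with `strict=False`, letting the decoder layers re-initialize for the new vocabulary. The helper below shows the filtering logic; the `FakeTensor` stand-in and key names are illustrative, but the same function works on real `torch.Tensor` values:

```python
def filter_compatible(checkpoint_state, model_state):
    """Keep checkpoint entries whose key exists in the model with the
    same shape; return (kept, dropped_keys) so you can log what was skipped."""
    kept, dropped = {}, []
    for key, value in checkpoint_state.items():
        if key in model_state and tuple(value.shape) == tuple(model_state[key].shape):
            kept[key] = value
        else:
            dropped.append(key)
    return kept, dropped


class FakeTensor:
    """Stand-in for a tensor: only .shape matters for the filtering step."""
    def __init__(self, *shape):
        self.shape = shape


# Toy state dicts mirroring the issue: the old decoder output layer was
# built for 38 classes, the new model expects 83.
ckpt = {
    "seq_decoder.out.weight": FakeTensor(38, 256),
    "seq_decoder.out.bias": FakeTensor(38),
    "backbone.conv1.weight": FakeTensor(64, 3, 7, 7),
}
model = {
    "seq_decoder.out.weight": FakeTensor(83, 256),
    "seq_decoder.out.bias": FakeTensor(83),
    "backbone.conv1.weight": FakeTensor(64, 3, 7, 7),
}

kept, dropped = filter_compatible(ckpt, model)
print(sorted(kept))     # only the backbone weight survives
print(sorted(dropped))  # both mismatched decoder entries are skipped
```

With real weights you would run this on `torch.load(path, map_location="cpu")` (taking the `"model"` entry if the checkpoint wraps it) against `model.state_dict()`, then call `model.load_state_dict(kept, strict=False)`. The dropped decoder layers start from random initialization, so they need to be trained on the new dataset.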