I ran multi-GPU distributed training with the following command:

```shell
CUDA_VISIBLE_DEVICES=5,6 python -m torch.distributed.run \
    main.py \
    --data_root ./text_spotting_datasets/ \
    --output_folder ./output/pretrain/stage1/ \
    --train_dataset totaltext_train mlt_train ic13_train ic15_train syntext1_train syntext2_train \
    --lr 0.0005 \
    --max_steps 400000 \
    --warmup_steps 5000 \
    --checkpoint_freq 10000 \
    --batch_size 6 \
    --tfm_pre_norm \
    --train_max_size 768 \
    --rec_loss_weight 2 \
    --use_fpn \
    --use_char_window_prompt
```

However, only GPU 5 is actually training; GPU 6 shows no memory usage at all.
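A likely cause, offered as a guess rather than a confirmed diagnosis: `torch.distributed.run` launches only one worker process per node unless `--nproc_per_node` is given (it defaults to 1), so with the command above a single process starts and only the first visible GPU is used. A sketch of the adjusted launch command:

```shell
# Sketch, not a confirmed fix: request one worker process per visible GPU.
# torch.distributed.run defaults to --nproc_per_node=1, which would leave
# the second GPU idle even though CUDA_VISIBLE_DEVICES exposes it.
CUDA_VISIBLE_DEVICES=5,6 python -m torch.distributed.run --nproc_per_node=2 \
    main.py \
    --data_root ./text_spotting_datasets/ \
    --output_folder ./output/pretrain/stage1/
    # ...remaining training arguments unchanged from the original command
```

This also assumes `main.py` itself initializes the process group (e.g. via `torch.distributed.init_process_group`) and binds each worker to its `LOCAL_RANK`; if it does not, the second process would still not use GPU 6.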