In this project, we develop a toolkit for lane detection to facilitate research. You are welcome to join us and help make the project more complete and practical.
If you do not have enough compute resources, we recommend running the project on AiStudio, which provides a free V100 (32GB memory). We have also open-sourced a Chinese version on AiStudio. The project link is here.
For the full list of changes, please refer to History.
- [2023-02-24]: We fixed some bugs in PPLanedet. CLRNet is still being fixed. If you want to achieve high performance, we recommend trying CondLaneNet.
- [2023-02-24] 🔥 We released version 5 of PPLanedet. In V5, we reproduced more backbones and necks, such as CSPRepBiFPN, which is used in YOLOv6. With these components, we achieved state-of-the-art performance on CULane with CondLaneNet: compared with the vanilla CondLaneNet, our CondLaneNet achieves a 79.92 F1 score with only 11M parameters. More details can be found in the CondLaneNet config.
PPlanedet is a lane detection toolkit based on PaddlePaddle, a high-performance deep learning framework. The goal of PPlanedet is to help researchers who use PaddlePaddle conduct lane detection research. If you have any suggestions about the project, feel free to contact me.
| Models | Components |
| :--- | :--- |
| Segmentation based<br>GAN based | Backbones<br>Necks<br>Metrics<br>Data Augmentation |
Step 1: Install PaddlePaddle>=2.4.0 (you can refer to the official documentation):
conda create -n pplanedet python=3.8 -y
conda activate pplanedet
conda install paddlepaddle-gpu==2.4.1 cudatoolkit=10.2 --channel https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/Paddle/
git clone https://github.com/zkyseu/PPlanedet
cd PPlanedet
pip install -r requirements.txt
python setup.py build develop
Download CULane, then extract it to $CULANEROOT. Create a link to the data directory.
cd $LANEDET_ROOT
mkdir -p data
ln -s $CULANEROOT data/CULane
For CULane, the directory structure should look like this:
$CULANEROOT/driver_xx_xxframe # data folders x6
$CULANEROOT/laneseg_label_w16 # lane segmentation labels
$CULANEROOT/list # data lists
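As a quick sanity check, the expected layout can be verified with a short script. This is a hypothetical helper for illustration, not part of PPlanedet:

```python
import os

def check_culane_layout(root):
    """Return a list of entries missing from a CULane root directory.

    Expected layout: several driver_xx_xxframe data folders,
    a laneseg_label_w16 folder, and a list folder.
    """
    missing = []
    # at least one driver_xx_xxframe data folder should exist
    if not any(name.startswith("driver_") for name in os.listdir(root)):
        missing.append("driver_xx_xxframe folders")
    # lane segmentation labels and data lists
    for entry in ("laneseg_label_w16", "list"):
        if not os.path.isdir(os.path.join(root, entry)):
            missing.append(entry)
    return missing
```

An empty return value means the dataset root matches the structure above.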
Download Tusimple, then extract it to $TUSIMPLEROOT. Create a link to the data directory.
cd $LANEDET_ROOT
mkdir -p data
ln -s $TUSIMPLEROOT data/tusimple
For Tusimple, the directory structure should look like this:
$TUSIMPLEROOT/clips # data folders
$TUSIMPLEROOT/label_data_xxxx.json # label json file x4
$TUSIMPLEROOT/test_tasks_0627.json # test tasks json file
$TUSIMPLEROOT/test_label.json # test label json file
Tusimple does not provide segmentation annotations, so we need to generate segmentation labels from the JSON annotations.
python tools/generate_seg_tusimple.py --root $TUSIMPLEROOT
# python tools/generate_seg_tusimple.py --root /root/paddlejob/workspace/train_data/datasets --savedir /root/paddlejob/workspace/train_data/datasets/seg_label
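For context, a TuSimple label file stores each lane as a list of x-coordinates sampled at fixed h_samples rows, which the script rasterizes into per-pixel class labels. A minimal sketch of that idea, assuming nothing about the actual internals of generate_seg_tusimple.py (which may, for example, also draw thick connecting lines):

```python
import json
import numpy as np

def label_line_to_mask(line, height=720, width=1280):
    """Rasterize one TuSimple label entry into a coarse segmentation mask.

    Each lane k is written as class id k+1 at its sampled (x, y) points.
    An x of -2 in the TuSimple format means the lane is absent at that row.
    """
    entry = json.loads(line)
    mask = np.zeros((height, width), dtype=np.uint8)
    for lane_id, xs in enumerate(entry["lanes"], start=1):
        for x, y in zip(xs, entry["h_samples"]):
            if x >= 0:  # skip "no lane at this row" markers
                mask[y, x] = lane_id
    return mask
```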
For training, run the following commands (shell scripts are under the script folder). More training details are in the documentation.
# training on single-GPU
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/scnn/resnet50_tusimple.py
# training on multi-GPU
export CUDA_VISIBLE_DEVICES=0,1,2,3
python -m paddle.distributed.launch tools/train.py -c configs/scnn/resnet50_tusimple.py
For testing, run
python tools/train.py -c configs/scnn/resnet50_tusimple.py \
--load /home/fyj/zky/tusimple/new/pplanedet/output_dir/resnet50_tusimple/latest.pd \
--evaluate-only
See tools/detect.py for detailed information.
python tools/detect.py --help
usage: detect.py [-h] [--img IMG] [--show] [--savedir SAVEDIR]
[--load_from LOAD_FROM]
config
positional arguments:
config The path of config file
optional arguments:
-h, --help show this help message and exit
--img IMG The path of the img (img file or img_folder), for
example: data/*.png
--show Whether to show the image
--savedir SAVEDIR The root of save directory
--load_from LOAD_FROM
The path of model
To run inference on example images in ./images and save the visualization images in the vis folder:
# first, add 'seg = False' to your config
python tools/detect.py configs/scnn/resnet50_tusimple.py --img images\
--load_from model.pd --savedir ./vis
If you want to save the visualization of the segmentation results, run the following command:
# first, add 'seg = True' to your config
python tools/detect.py configs/scnn/resnet50_tusimple.py --img images\
--load_from model.pd --savedir ./vis
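For reference, the command-line interface shown in the help text above corresponds to an argument parser along these lines. This is an illustrative sketch, not the actual tools/detect.py source:

```python
import argparse

def build_parser():
    # Mirrors the options printed by `python tools/detect.py --help`
    parser = argparse.ArgumentParser("detect.py")
    parser.add_argument("config", help="The path of config file")
    parser.add_argument("--img",
                        help="The path of the img (img file or img_folder)")
    parser.add_argument("--show", action="store_true",
                        help="Whether to show the image")
    parser.add_argument("--savedir", help="The root of save directory")
    parser.add_argument("--load_from", help="The path of model")
    return parser
```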
If you want to test the inference speed, run the following command. Note that the test script is written in Python rather than C++, so there may be some difference between the officially reported speed and the speed measured here.
python tools/test_speed.py configs/condlane/cspresnet_50_culane.py --model_path output_dir/cspresnet_50_culane/model.pd
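For context, a speed test of this kind typically warms the model up first and then averages over many runs. A minimal sketch of the idea (hypothetical, not the actual tools/test_speed.py):

```python
import time

def measure_fps(infer_fn, n_warmup=10, n_iters=100):
    """Rough FPS measurement for a zero-argument inference callable.

    Warmup calls absorb one-time costs (kernel compilation, caches)
    before the timed loop starts.
    """
    for _ in range(n_warmup):
        infer_fn()
    start = time.perf_counter()
    for _ in range(n_iters):
        infer_fn()
    elapsed = time.perf_counter() - start
    return n_iters / elapsed
```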
PPlanedet is released under the MIT license. We only permit academic use of our project.
If you find our project useful in your research, please consider citing:
@misc{PPlanedet,
title={PPlanedet, A Toolkit for lane detection based on PaddlePaddle},
author={Kunyang Zhou},
howpublished = {\url{https://github.com/zkyseu/PPlanedet}},
year={2022}
}
Models reproduced in our project:
@Inproceedings{pan2018SCNN,
author = {Pan, Xingang and Shi, Jianping and Luo, Ping and Wang, Xiaogang and Tang, Xiaoou},
title = {Spatial As Deep: Spatial CNN for Traffic Scene Understanding},
booktitle = {AAAI Conference on Artificial Intelligence (AAAI)},
month = {February},
year = {2018}
}
@InProceedings{qin2020ultra,
author = {Qin, Zequn and Wang, Huanyu and Li, Xi},
title = {Ultra Fast Structure-aware Deep Lane Detection},
booktitle = {The European Conference on Computer Vision (ECCV)},
year = {2020}
}
@article{2017ERFNet,
author = {Romera, E. and Alvarez, J. M. and Bergasa, L. M. and Arroyo, R.},
title = {ERFNet: Efficient Residual Factorized ConvNet for Real-time Semantic Segmentation},
journal = {IEEE Transactions on Intelligent Transportation Systems (T-ITS)},
year = {2017}
}
@InProceedings{2021RESA,
author = {Zheng, Tu and Fang, Hao and Zhang, Yi and Tang, Wenjian and Yang, Zheng and Liu, Haifeng and Cai, Deng},
title = {RESA: Recurrent Feature-Shift Aggregator for Lane Detection},
booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
year = {2021}
}
@InProceedings{DeepLabV3+,
author = {Chen, Liang-Chieh and Zhu, Yukun and Papandreou, George and Schroff, Florian and Adam, Hartwig},
title = {Encoder-decoder with atrous separable convolution for semantic image segmentation},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2018}
}
@InProceedings{CondLaneNet,
author = {Liu, Lizhe and Chen, Xiaohao and Zhu, Siyu and Tan, Ping},
title = {CondLaneNet: a Top-to-down Lane Detection Framework Based on Conditional Convolution},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
year = {2021}
}
@article{RTFormer,
author = {Wang, Jian and Gou, Chenhui and Wu, Qiman and Feng, Haocheng and Han, Junyu and Ding, Errui and Wang, Jingdong},
title = {RTFormer: Efficient Design for Real-Time Semantic Segmentation with Transformer},
journal = {arXiv preprint arXiv:2210.07124},
year = {2022}
}