The Usage of ADM PV-RCNN


Note

This document mainly introduces the use of the ADM PV-RCNN Docker container; further details can be found here.

This Docker container contains a complete pipeline of working files from the LiDAR to the algorithm model. It can process Pcap-format data packets collected by LiDAR. Currently, it only supports data collected by four types of LiDAR produced by Robosense: RS-LiDAR-16, RS-LiDAR-32, RS-LiDAR-64, and RS-LiDAR-128. More brands and types of LiDAR will be supported in the future.


Description

ADM PV-RCNN, short for Adaptive Deformable Module PV-RCNN, is a new point-based and voxel-based 3D object detection model designed on top of PV-RCNN (see github.com/open-mmlab/OpenPCDet). We add an adaptive deformable convolution module and a context fusion module to PV-RCNN. The adaptive deformable convolution module addresses PV-RCNN's poor recognition accuracy in fuzzy and long-distance scenes, and the context fusion module reduces the probability of false positives in areas where the point cloud is unevenly distributed. We evaluated the proposed ADM PV-RCNN model on the KITTI dataset, and the results show that it significantly outperforms PV-RCNN and other similar models.


Download

Simply run docker pull 663lab/adm-pv-rcnn:v1.0 on a machine where Docker is installed to download the Docker container.


Usage

1. Parse the LiDAR Pcap packet

Enter the Docker container and mount the Pcap-format data packet collected by the Robosense LiDAR into the Docker container.

a. Go to the Robosense LiDAR SDK folder

cd ~/catkin_ws/src/rslidar_sdk/

b. Convert Pcap packets to Pcd data

▪︎ Modify the parameters in the config.yaml file under the config directory to the corresponding values, e.g. set msg_source to 3, set pcap_path to the path of the Pcap packet, and set lidar_type to the type of your LiDAR.
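For illustration, the relevant part of config.yaml might look like the following (a hedged sketch; the exact keys and layout depend on your rslidar_sdk version, and the pcap_path and lidar_type values here are placeholders):

```yaml
common:
  msg_source: 3                       # 3 = read packets from a Pcap file
lidar:
  - driver:
      lidar_type: RS32                # placeholder: set to your LiDAR model
      pcap_path: /path/to/your.pcap   # placeholder: path to the mounted Pcap packet
```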

▪︎ Turn the Pcap packets into ROSBag packets by running the following command:

roslaunch rslidar_sdk start.launch

▪︎ Go to the directory where the generated ROSBag packets are stored

▪︎ Repair the ROSBag packet by running the following command:

rosbag reindex xxx.bag.active
rosbag fix xxx.bag.active result.bag

▪︎ Turn the ROSBag packets into Pcd data by running the following command:

rosrun pcl_ros bag_to_pcd result.bag ~/pcdfiles

The converted Pcd data files are stored in the /root/pcdfiles/ directory.
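Before moving on to the bin conversion, the converted files can be sanity-checked by reading their headers (the PCD header is plain text even when the point data is binary). A minimal sketch in Python; the sample header below is a hypothetical PCD v0.7 file, not output from this container:

```python
def parse_pcd_header(text):
    """Return a dict of PCD header fields (keys lowercased)."""
    header = {}
    for line in text.splitlines():
        if line.startswith("#"):       # skip comment lines
            continue
        key, _, value = line.partition(" ")
        header[key.lower()] = value.split()
        if key == "DATA":              # the header ends at the DATA line
            break
    return header

sample = """# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z intensity
SIZE 4 4 4 4
TYPE F F F F
COUNT 1 1 1 1
WIDTH 2
HEIGHT 1
POINTS 2
DATA ascii
1.0 2.0 3.0 0.5
4.0 5.0 6.0 0.7
"""
hdr = parse_pcd_header(sample)
print(hdr["fields"], hdr["points"])  # → ['x', 'y', 'z', 'intensity'] ['2']
```

Checking that FIELDS lists x, y, z, and intensity confirms the files carry the channels the bin conversion below expects.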

2. Convert LiDAR point clouds to standardized bin files

a. Convert Pcd data files to binary bin data files

▪︎ Go to the Pcd-to-bin project folder

cd ~/pcd2bin/

▪︎ Modify the pcd2bin.cpp file in the current directory, and change pcd_path and bin_path to corresponding paths.

▪︎ Compile and install the Pcd-to-bin project by running the following command:

mkdir build && cd build
cmake .. && make

▪︎ Generate the binary bin data files by running the following command:

./pcd2bin
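What this converter does can be sketched in a few lines of Python (an illustrative re-implementation, not the project's actual pcd2bin code; it assumes ASCII PCD data whose first four fields are x, y, z, intensity):

```python
import struct

def pcd_ascii_to_kitti_bin(pcd_text):
    """Pack each point's x, y, z, intensity as little-endian float32,
    the layout KITTI velodyne .bin files use (16 bytes per point)."""
    out = bytearray()
    in_data = False
    for line in pcd_text.splitlines():
        if in_data and line.strip():
            x, y, z, i = (float(v) for v in line.split()[:4])
            out += struct.pack("<4f", x, y, z, i)
        elif line.startswith("DATA"):  # point data begins after this line
            in_data = True
    return bytes(out)

# Two points of an ASCII PCD file (header omitted down to the DATA line)
sample = "DATA ascii\n1.0 2.0 3.0 0.5\n4.0 5.0 6.0 0.7\n"
blob = pcd_ascii_to_kitti_bin(sample)
print(len(blob))  # → 32 (2 points × 16 bytes)
```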

b. Convert the bin data files to standardized bin files

▪︎ Go to the Modbin project folder:

cd ~/modbin/

▪︎ Generate the standardized bin data files by running the following command:

mkdir ~/modfiles
conda activate model
python modbin.py --ori_path ~/binfiles --mod_path ~/modfiles

3. Usage of ADM PV-RCNN

a. Go to the ADM PV-RCNN project folder:

cd ~/ADM-PV-RCNN/

b. Copy ADM PV-RCNN src into OpenPCDet:

zsh ./init.sh

c. Prepare KITTI dataset:

▪︎ Please download the official KITTI 3D object detection dataset and organize the downloaded files as follows (the road planes can be downloaded from [road plane]; they are optional and used for data augmentation during training):

ADM-PV-RCNN
├── OpenPCDet
│   ├── data
│   │   ├── kitti
│   │   │   │──ImageSets
│   │   │   │──training
│   │   │   │   ├──calib & velodyne & label_2 & image_2 & (optional: planes)
│   │   │   │──testing
│   │   │   │   ├──calib & velodyne & image_2
│   ├── pcdet
│   ├── tools

▪︎ Generate the data infos by running the following command:

python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml

d. Run experiments with a specific configuration file:

▪︎ Test and evaluate a pretrained model. You can download our pretrained model here.

• Go to tools:

cd OpenPCDet/tools

• Test with a pretrained model:

python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --ckpt ${CKPT}

• For example:

python test.py --cfg_file cfgs/kitti_models/def_pv_rcnn.yaml --batch_size 4 --ckpt ${SAVED_CKPT_PATH}/def_pv_rcnn.pth

▪︎ Train a model:

• Train with multiple GPUs:

sh scripts/dist_train.sh ${NUM_GPUS} --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE}  --epochs 80

• For example:

sh scripts/dist_train.sh 8 --cfg_file cfgs/kitti_models/def_pv_rcnn.yaml  --batch_size 16  --epochs 100

• Train with a single GPU:

python train.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --epochs 100

e. Quick demo

▪︎ Run the demo with a pretrained model on your custom point cloud data by running the following command:

python demo.py --cfg_file cfgs/kitti_models/adm_pv_rcnn.yaml --ckpt adm_pv_rcnn_epoch_100.pth --data_path ${POINT_CLOUD_DATA}

Here ${POINT_CLOUD_DATA} could be in one of the following formats:

1). Your transformed custom data with a single numpy file like my_data.npy.

2). Your transformed custom data with a single bin file like my_data.bin.

3). Your transformed custom data with a directory to test with multiple point cloud data.

4). The original KITTI .bin data within data/kitti, like data/kitti/testing/velodyne/000010.bin.
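If you need to turn your own points into the .bin format the demo accepts, the layout can be sketched as follows (an illustration of the KITTI float32 x, y, z, intensity layout, not a script shipped with the container; my_data.bin is a placeholder name):

```python
import struct

def write_kitti_bin(points, path):
    """Write (x, y, z, intensity) tuples as little-endian float32."""
    with open(path, "wb") as f:
        for p in points:
            f.write(struct.pack("<4f", *p))

def read_kitti_bin(path):
    """Read a KITTI-style .bin back into a list of 4-tuples."""
    with open(path, "rb") as f:
        raw = f.read()
    return [struct.unpack_from("<4f", raw, off) for off in range(0, len(raw), 16)]

pts = [(1.0, 2.0, 3.0, 0.5), (4.0, 5.0, 6.0, 0.25)]
write_kitti_bin(pts, "my_data.bin")
print(read_kitti_bin("my_data.bin") == pts)  # → True
```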

PS: If you have any questions, please email me at jensen.acm@gmail.com (please indicate “ADM PV-RCNN” as the subject of the email).
