# traffic_light_classifier

## Purpose

`traffic_light_classifier` is a package for classifying traffic light labels using an image cropped around a traffic light. This package provides two classifier models: `cnn_classifier` and `hsv_classifier`.

## Inner-workings / Algorithms

### cnn_classifier

Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2.
A total of 83,400 TIER IV internal images of Japanese traffic lights (58,600 for training, 14,800 for evaluation, and 10,000 for testing) were used for fine-tuning.
The information of the models is listed here:

| Name            | Input Size | Test Accuracy |
| --------------- | ---------- | ------------- |
| EfficientNet-b1 | 128 x 128  | 99.76%        |
| MobileNet-v2    | 224 x 224  | 99.81%        |

### hsv_classifier

Traffic light colors (green, yellow, and red) are classified in the HSV color space.
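As a rough illustration (not the package's actual implementation), HSV classification amounts to checking whether pixels of the cropped image fall inside per-color min/max ranges in hue, saturation, and value. The sketch below mirrors the `green_min_h`-style parameters listed later, but the threshold values shown are hypothetical, not the package defaults:

```python
# Minimal sketch of HSV-range color classification.
# Hue is assumed in OpenCV's 0-179 range; saturation and value in 0-255.
# All threshold values below are hypothetical examples.

HSV_RANGES = {
    # color: (min_h, min_s, min_v, max_h, max_s, max_v)
    "green": (50, 100, 100, 90, 255, 255),
    "yellow": (20, 100, 100, 35, 255, 255),
    "red": (160, 100, 100, 179, 255, 255),
}


def classify_pixel(h: int, s: int, v: int) -> str:
    """Return the first color whose HSV range contains the pixel, else 'unknown'."""
    for color, (min_h, min_s, min_v, max_h, max_s, max_v) in HSV_RANGES.items():
        if min_h <= h <= max_h and min_s <= s <= max_s and min_v <= v <= max_v:
            return color
    return "unknown"
```

Note that red hue wraps around 0 in HSV, so a practical classifier typically checks two hue intervals (near 0 and near 179) for red; the sketch keeps a single interval for brevity.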

## About Label

The message type is designed to comply with the unified road signs proposed at the Vienna Convention. This idea has also been proposed in Autoware.Auto.

There are rules for naming the labels that nodes receive. One traffic light is represented by a character string of `color-shape` pairs separated by commas: `color1-shape1, color2-shape2`.

For example, a simple traffic light showing a red circle and a red cross must be expressed as `"red-circle, red-cross"`.

These colors and shapes are assigned to the message as shown in the figure `TrafficLightDataStructure.jpg`.
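To make the naming rule concrete, here is a small hypothetical parser that splits such a label string into `(color, shape)` pairs; it only illustrates the format and is not code from the package:

```python
def parse_traffic_light_label(label: str) -> list[tuple[str, str]]:
    """Split a 'color1-shape1, color2-shape2' label into (color, shape) tuples."""
    elements = []
    for token in label.split(","):
        # Each comma-separated token is one element, e.g. "red-circle".
        color, shape = token.strip().split("-", 1)
        elements.append((color, shape))
    return elements
```

For instance, `parse_traffic_light_label("red-circle, red-cross")` yields the two elements of the red-and-red-cross example above.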

## Inputs / Outputs

### Input

| Name            | Type                                               | Description            |
| --------------- | -------------------------------------------------- | ---------------------- |
| `~/input/image` | `sensor_msgs::msg::Image`                          | input image            |
| `~/input/rois`  | `tier4_perception_msgs::msg::TrafficLightRoiArray` | rois of traffic lights |

### Output

| Name                       | Type                                            | Description         |
| -------------------------- | ----------------------------------------------- | ------------------- |
| `~/output/traffic_signals` | `tier4_perception_msgs::msg::TrafficSignalArray` | classified signals  |
| `~/output/debug/image`     | `sensor_msgs::msg::Image`                       | image for debugging |

## Parameters

### Node Parameters

| Name              | Type | Description                                  |
| ----------------- | ---- | -------------------------------------------- |
| `classifier_type` | int  | if the value is `1`, `cnn_classifier` is used |

### Core Parameters

#### cnn_classifier

| Name                    | Type             | Description                          |
| ----------------------- | ---------------- | ------------------------------------ |
| `classifier_label_path` | str              | path to the label file               |
| `classifier_model_path` | str              | path to the model file               |
| `classifier_precision`  | str              | TensorRT precision, `fp16` or `int8` |
| `classifier_mean`       | vector\<double\> | 3-channel input image mean           |
| `classifier_std`        | vector\<double\> | 3-channel input image std            |
| `apply_softmax`         | bool             | whether or not to apply softmax      |
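For intuition: `classifier_mean` / `classifier_std` normalize each input channel before inference, and `apply_softmax` turns the network's raw class scores into probabilities. A minimal sketch of both operations (the numeric values in the comments are hypothetical, not the package defaults):

```python
import math


def normalize_pixel(rgb: list[float], mean: list[float], std: list[float]) -> list[float]:
    """Per-channel normalization applied to each input pixel: (x - mean) / std."""
    return [(c - m) / s for c, m, s in zip(rgb, mean, std)]


def softmax(logits: list[float]) -> list[float]:
    """Convert raw class scores to probabilities (used when apply_softmax is true)."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

With `mean = [128, 128, 128]` and `std = [64, 64, 64]`, a pixel equal to the mean maps to `[0.0, 0.0, 0.0]`; `softmax` always returns values that sum to 1, with the largest logit getting the largest probability.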

#### hsv_classifier

| Name           | Type | Description                                    |
| -------------- | ---- | ---------------------------------------------- |
| `green_min_h`  | int  | the minimum hue of green color                 |
| `green_min_s`  | int  | the minimum saturation of green color          |
| `green_min_v`  | int  | the minimum value (brightness) of green color  |
| `green_max_h`  | int  | the maximum hue of green color                 |
| `green_max_s`  | int  | the maximum saturation of green color          |
| `green_max_v`  | int  | the maximum value (brightness) of green color  |
| `yellow_min_h` | int  | the minimum hue of yellow color                |
| `yellow_min_s` | int  | the minimum saturation of yellow color         |
| `yellow_min_v` | int  | the minimum value (brightness) of yellow color |
| `yellow_max_h` | int  | the maximum hue of yellow color                |
| `yellow_max_s` | int  | the maximum saturation of yellow color         |
| `yellow_max_v` | int  | the maximum value (brightness) of yellow color |
| `red_min_h`    | int  | the minimum hue of red color                   |
| `red_min_s`    | int  | the minimum saturation of red color            |
| `red_min_v`    | int  | the minimum value (brightness) of red color    |
| `red_max_h`    | int  | the maximum hue of red color                   |
| `red_max_s`    | int  | the maximum saturation of red color            |
| `red_max_v`    | int  | the maximum value (brightness) of red color    |

## Customization of CNN model

Currently, Autoware provides MobileNet-v2 and EfficientNet-b1. The corresponding onnx files are `data/traffic_light_classifier_mobilenetv2.onnx` and `data/traffic_light_classifier_efficientNet_b1.onnx` (these files are downloaded during the build process). You can also train and apply your own models, as shown below.

To train models and export them to onnx, we recommend open-mmlab/mmclassification. Please follow the official documentation to install and experiment with mmclassification. If you run into trouble, the FAQ page may help.

The following steps are an example of a quick start.

### step 0. Install MMCV and MIM

NOTE: First of all, install PyTorch suitable for your CUDA version (CUDA 11.6 is supported in Autoware).

To install mmcv suitable for your CUDA version, install it by specifying a URL:

```shell
# Install mim
$ pip install -U openmim

# Install mmcv on a machine with CUDA11.6 and PyTorch1.13.0
$ pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu116/torch1.13/index.html
```

### step 1. Install MMClassification

You can install MMClassification as a Python package or from source.

```shell
# As a Python package
$ pip install mmcls

# From source
$ git clone https://github.com/open-mmlab/mmclassification.git
$ cd mmclassification
$ pip install -v -e .
```

### step 2. Train your model

Train the model with your experiment configuration file. For details of the config file, see here.

```shell
# [] is optional; you can start training from a pre-trained checkpoint
$ mim train mmcls YOUR_CONFIG.py [--resume-from YOUR_CHECKPOINT.pth]
```

### step 3. Export onnx model

To export an onnx model, use `mmclassification/tools/deployment/pytorch2onnx.py` or open-mmlab/mmdeploy.

```shell
$ cd ~/mmclassification/tools/deployment
$ python3 pytorch2onnx.py YOUR_CONFIG.py ...
```

After obtaining your onnx model, update the parameters defined in the launch file (e.g. `model_file_path`, `label_file_path`, `input_h`, `input_w`...). Note that we only support labels defined in `tier4_perception_msgs::msg::TrafficLightElement`.

## Assumptions / Known limits

## (Optional) Error detection and handling

## (Optional) Performance characterization

## (Optional) References/External links

[1] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen, "MobileNetV2: Inverted Residuals and Linear Bottlenecks," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 4510-4520, doi: 10.1109/CVPR.2018.00474.

[2] M. Tan and Q. Le, "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks," International Conference on Machine Learning, PMLR, 2019.

## (Optional) Future extensions / Unimplemented parts