NXP Tech Blog


EsaHuang
NXP Employee

1. To train a model with YOLOv5, first clone the repository and set up the environment.

git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt

 

2. Prepare the data, including images and labels.

Download the dataset from here; the classes include pedestrian, car, pothole, red light, and green light. You can add your own data as needed. Split the data into train, test, and valid sets at an 8:1:1 ratio (see the split sketch below).

For better results on the videos you plan to demo, you can also label frames from those videos with the labelme tool.
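A minimal split sketch in Python is shown below; the source folder names, output root, and random seed are assumptions, so adapt them to your own dataset layout (YOLOv5 expects parallel images/ and labels/ trees).

import random
import shutil
from pathlib import Path

random.seed(0)                       # fixed seed so the split is reproducible

SRC_IMG = Path("dataset/images")     # assumed: all images in one folder
SRC_LBL = Path("dataset/labels")     # assumed: matching YOLO .txt labels
DST = Path("dcc")                    # assumed output root referenced by dcc.yaml

images = sorted(SRC_IMG.glob("*.jpg"))
random.shuffle(images)

n = len(images)
splits = {
    "train": images[: int(0.8 * n)],               # 80 %
    "valid": images[int(0.8 * n) : int(0.9 * n)],  # 10 %
    "test": images[int(0.9 * n) :],                # 10 %
}

for split, files in splits.items():
    (DST / "images" / split).mkdir(parents=True, exist_ok=True)
    (DST / "labels" / split).mkdir(parents=True, exist_ok=True)
    for img in files:
        shutil.copy(img, DST / "images" / split / img.name)
        label = SRC_LBL / (img.stem + ".txt")
        if label.exists():
            shutil.copy(label, DST / "labels" / split / label.name)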

 

3. Train the model with train.py. The pretrained model yolov5s.pt can be downloaded here. After training, copy the trained weights into the yolov5 directory. A sample dcc.yaml sketch follows the commands below.

python train.py --weights yolov5s.pt --data dcc.yaml --img 320
cp yolov5/runs/train/expX/weights/best.pt yolov5/dcc.pt
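For reference, the dcc.yaml passed to --data could look like the following minimal sketch; the dataset paths are assumptions that must match the split folders from step 2, while the class list is the one given above.

# dcc.yaml - YOLOv5 dataset config (paths are assumptions, adjust to your layout)
path: ../dcc          # dataset root
train: images/train
val: images/valid
test: images/test

nc: 5                 # number of classes
names: ["pedestrian", "car", "pothole", "red light", "green light"]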


4. Export the model to .pb format.

python export.py --weights dcc.pt --data dcc.yaml --include pb --img 320
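Before moving on to eIQ, you can optionally sanity-check the exported frozen graph with a short TensorFlow snippet such as the one below; the file name dcc.pb is an assumption (export.py writes the .pb next to the weights), and the snippet only prints the first and last ops so you know which input and output tensors to expect in the converter.

import tensorflow as tf

# Load the frozen GraphDef produced by export.py (file name is an assumption)
graph_def = tf.compat.v1.GraphDef()
with open("dcc.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Import the graph and print the first/last operations to confirm input/output tensors
with tf.Graph().as_default() as graph:
    tf.compat.v1.import_graph_def(graph_def, name="")
    ops = graph.get_operations()
    print("input :", ops[0].name, ops[0].outputs[0].shape)
    print("output:", ops[-1].name, ops[-1].outputs[0].shape)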


5. Convert the model from .pb to .tflite format with the eIQ Toolkit for deployment on the i.MX95 CPU (the NPU is not supported yet).

Open the eIQ Model Tool, load the model, and convert it to TFLite. Remember to enable quantization during conversion to shorten inference time, and set both the input and output data types to uint8. Keep the number of samples at the default value.
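Once the quantized .tflite file is on the board, a minimal inference sketch for the i.MX95 CPU could look like the following; the model and image file names are assumptions, and it assumes the tflite_runtime package is available on the target (on a PC, tf.lite.Interpreter can be used instead). With uint8 input/output you feed raw 0-255 pixels and dequantize the output before the usual YOLOv5 box decoding.

import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter

# Model and image names are assumptions; use the files produced in your own setup
interpreter = Interpreter(model_path="dcc.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize the frame to the model input (320x320 here) and keep uint8, no normalization
_, height, width, _ = inp["shape"]
img = Image.open("test.jpg").convert("RGB").resize((width, height))
data = np.expand_dims(np.asarray(img, dtype=np.uint8), axis=0)

interpreter.set_tensor(inp["index"], data)
interpreter.invoke()

# Dequantize the raw uint8 output back to float before box decoding and NMS
raw = interpreter.get_tensor(out["index"])
scale, zero_point = out["quantization"]
pred = (raw.astype(np.float32) - zero_point) * scale
print(pred.shape)   # e.g. (1, N, 10): x, y, w, h, objectness, 5 class scores

Decoding the predictions into boxes and running non-max suppression follows the standard YOLOv5 post-processing and is omitted here; the snippet only demonstrates the quantized input/output handling.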

 

Note: The models in the attachment are for reference only.

Note: Links to the test videos can be found below. The organized test videos can be downloaded here.

https://www.youtube.com/watch?v=DcMf8IjO6Qo

https://www.youtube.com/watch?v=HUbKO1cACLE&pp=ygURZHJpdmluZyBpbiBzdXpob3U%3D

Potholes in a rural road - Free Stock Video (mixkit.co)

Driving Los Angeles 8K HDR Dolby Vision Rear View Long Beach to Downtown LA California, USA (yout...

Reference link:

GitHub - ultralytics/yolov5: YOLOv5 in PyTorch > ONNX > CoreML > TFLite
