TensorFlow provides a set of pretrained models ready for inference; please check Caffe and TensorFlow Pretrained Models.
You can use a model as is, or retrain it with your own data to detect specific objects for your custom application.
This post collects some useful links that can help you with this task.
The link above also shows the needed steps to prepare your data for the retraining process (image labeling).
For the link above, you also need to follow the data preparation steps for the retraining process as described in the Inception retraining tutorial.
Make sure you have exported the required PYTHONPATH variable:
export PYTHONPATH=$PYTHONPATH:/path/to/tf_models/models/research:/path/to/tf_models/models/research/slim
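To confirm the paths were picked up correctly, a quick sanity check (assuming the TensorFlow models repository layout above) is to try importing the object_detection package:

python -c "import object_detection; print(object_detection.__file__)"

If this prints a path under tf_models/models/research, the environment is set up; an ImportError means PYTHONPATH still needs fixing.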
A few tips:
- Retraining a model is faster than training one from scratch, but it can still take a long time to complete. The duration depends on many factors, such as the number of steps defined in the model's *.config file. Be careful not to overfit your model if your dataset is too small and the number of steps is too large. TensorFlow also saves checkpoints during the retraining process, which you can prepare for inference and test before retraining finishes, so you can check when the model is good enough for your application. Please check "Exporting a Trained Inference Graph" in the Inception retraining tutorial and keep in mind that you can follow those steps before the training process is complete (see the command sketch after these tips). Of course, early checkpoints may not be well trained.
- If you are running OpenCV DNN inference, you may need to run the following command to generate the *.pbtxt file, where X corresponds to the number of classes trained in your model and tf_text_graph_ssd.py is an OpenCV DNN script (a minimal Python inference sketch using these files is shown below):
python tf_text_graph_ssd.py --input frozen_inference_graph.pb --output frozen_inference_graph.pbtxt --num_classes X
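As a reference for the checkpoint tip above, the export step can be sketched with the TensorFlow Object Detection API's export_inference_graph.py script; the config path, checkpoint number (5000) and output directory below are placeholders, so adjust them to match your setup and the Inception retraining tutorial:

python object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path /path/to/your_model.config --trained_checkpoint_prefix /path/to/training/model.ckpt-5000 --output_directory /path/to/exported_ckpt_5000

The output directory contains a frozen_inference_graph.pb that you can test for inference exactly like a fully trained model.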
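Once you have both the frozen_inference_graph.pb and the generated *.pbtxt files, a minimal OpenCV DNN inference sketch in Python could look like the code below. The file names, the 300x300 input size and the 0.5 confidence threshold are assumptions for a typical SSD model, so adjust them for your own network:

import cv2

# Placeholders: point these to your exported model files.
PB_PATH = "frozen_inference_graph.pb"
PBTXT_PATH = "frozen_inference_graph.pbtxt"

# Load the TensorFlow model with OpenCV's DNN module.
net = cv2.dnn.readNetFromTensorflow(PB_PATH, PBTXT_PATH)

# Read a test image (placeholder file name).
image = cv2.imread("test.jpg")
height, width = image.shape[:2]

# SSD models are commonly fed 300x300 inputs; change this if your model differs.
blob = cv2.dnn.blobFromImage(image, size=(300, 300), swapRB=True, crop=False)
net.setInput(blob)
detections = net.forward()

# Each detection row is [batch_id, class_id, confidence, left, top, right, bottom],
# with the box coordinates normalized to [0, 1].
for detection in detections[0, 0]:
    confidence = float(detection[2])
    if confidence > 0.5:  # assumed threshold
        left = int(detection[3] * width)
        top = int(detection[4] * height)
        right = int(detection[5] * width)
        bottom = int(detection[6] * height)
        cv2.rectangle(image, (left, top), (right, bottom), (0, 255, 0), 2)

cv2.imwrite("result.jpg", image)

From here you can map each class_id back to the labels you used during retraining.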