Install TensorFlow on Raspberry Pi
1. Download and install Raspberry Pi OS
In this tutorial we are using Raspberry Pi 4.
Please follow the instructions on the official page: https://www.raspberrypi.org/software/
2. Update the Raspberry Pi OS
After installation we need to update the operating system.
Type:
sudo apt-get update
sudo apt-get dist-upgrade
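You can optionally check which OS release you are running.
Type:
cat /etc/os-release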
3. Install TensorFlow
Type:
sudo pip3 install setuptools --upgrade
sudo apt-get install libatlas-base-dev
sudo pip3 install tensorflow
sudo pip3 install pillow lxml jupyter matplotlib cython
sudo apt-get install python-tk
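To confirm that TensorFlow installed correctly, you can print its version from Python (an optional check, not part of the original steps).
Type:
python3 -c "import tensorflow as tf; print(tf.__version__)"
If the installation succeeded, this prints the installed TensorFlow version.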
4. Install OpenCV
In this tutorial we will use OpenCV.
OpenCV (Open Source Computer Vision Library) is a library of programming functions mainly aimed at real-time computer vision.
OpenCV features GPU acceleration for real-time operations.
Install OpenCV and its dependencies.
Type:
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install qt4-dev-tools libatlas-base-dev
sudo pip3 install opencv-python==3.4.6.27
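You can verify the OpenCV installation the same way by printing its version.
Type:
python3 -c "import cv2; print(cv2.__version__)"
With the package version pinned above, this should print 3.4.6.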
5. Install Protobuf
The TensorFlow object detection API uses Protobuf, a package that implements Google’s Protocol Buffer data format.
You can install it with apt-get.
Type:
sudo apt-get install protobuf-compiler
Verify the installation.
Type:
protoc --version
You should see output similar to:
libprotoc 3.6.1
6. Set up TensorFlow Directory Structure and PYTHONPATH Variable
Now that the packages are installed, we need to set up the TensorFlow directory.
In the home directory, create a new directory, e.g. tensorflow:
Type:
mkdir tensorflow
cd tensorflow
Clone the TensorFlow models repository from GitHub:
Type:
git clone --depth 1 https://github.com/tensorflow/models.git
Add a PYTHONPATH environment variable to the bash config file (.bashrc) located in the home directory.
Use vi or nano to edit .bashrc.
Type:
vi ~/.bashrc
And add the PYTHONPATH environment variable on the last line:
export PYTHONPATH=$PYTHONPATH:/home/pi/tensorflow/models/research:/home/pi/tensorflow/models/research/slim
Save and exit the file. This ensures the “export PYTHONPATH” command is run every time you open a new terminal.
Close and then re-open the terminal.
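To confirm the variable is set in the new terminal session, you can print it.
Type:
echo $PYTHONPATH
The output should include both of the paths added above.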
Then change directory.
Type:
cd /home/pi/tensorflow/models/research
Type:
protoc object_detection/protos/*.proto --python_out=.
This command converts every name.proto file into a corresponding name_pb2.py file.
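To check that the compiled protos can be imported (assuming PYTHONPATH is set as above), you can try importing one of the generated modules, for example pipeline_pb2.
Type:
python3 -c "from object_detection.protos import pipeline_pb2; print('protos OK')"
If this prints without an ImportError, the compilation and PYTHONPATH setup worked.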
7. Download the SSDLite-MobileNet model
SSDLite is an object detection model that aims to produce bounding boxes around objects in an image.
SSDLite uses MobileNet for feature extraction to enable real-time object detection on mobile devices.
In the benchmark, the float version of SSDLite uses the small minimalistic MobileNet V3 variant.
The integer version uses the EdgeTPU variant of MobileNet V3.
With the SSDLite model, the Raspberry Pi 4 performs fairly well, achieving a frame rate of just over 1 FPS. This is fast enough for many object detection applications that do not require high frame rates.
Change directory.
Type:
cd /home/pi/tensorflow/models/research/object_detection
Now download the SSDLite-MobileNet model and unpack it.
Type:
wget http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
tar -xzvf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
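Unpacking creates a folder named ssdlite_mobilenet_v2_coco_2018_05_09; you can list it to confirm the model files are present.
Type:
ls ssdlite_mobilenet_v2_coco_2018_05_09
You should see frozen_inference_graph.pb among the extracted files.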
8. Detect objects
Download the Python script for object detection.
Type:
wget https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/master/Object_detection_picamera.py
The Python script Object_detection_picamera.py detects objects in live feeds from a Picamera or USB webcam. It sets paths to the model and label map, loads the model into memory, initializes the Picamera or USB webcam, and then begins performing object detection on each video frame from the Picamera or USB webcam.
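The following is a condensed sketch of what the script does internally. It is not the actual script; the variable names and structure are simplified assumptions based on the standard TensorFlow 1.x object detection workflow.

# Simplified sketch (not the actual script) of the detection loop,
# using the TensorFlow 1.x API that the script is written against.
import cv2
import numpy as np
import tensorflow as tf

PATH_TO_CKPT = 'ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb'

# Load the frozen detection graph into memory
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as f:
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
sess = tf.Session(graph=detection_graph)

# Input and output tensors of the exported object detection model
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
scores = detection_graph.get_tensor_by_name('detection_scores:0')
classes = detection_graph.get_tensor_by_name('detection_classes:0')

cap = cv2.VideoCapture(0)  # USB webcam
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # The model expects a batch of RGB images
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    frame_expanded = np.expand_dims(rgb, axis=0)
    out_boxes, out_scores, out_classes = sess.run(
        [boxes, scores, classes], feed_dict={image_tensor: frame_expanded})
    # ...draw boxes whose score exceeds a threshold, then display the frame...
    cv2.imshow('Object detector', frame)
    if cv2.waitKey(1) == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()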
In this tutorial we will use a webcam. To test the script, connect a webcam to the Raspberry Pi and type:
python3 Object_detection_picamera.py --usbcam
If you’re using a Picamera, make sure it is enabled in the Raspberry Pi configuration menu.
To test with Picamera type:
python3 Object_detection_picamera.py
Once the script initializes (which can take up to 20 seconds), you will see a window showing a live view from your camera. Common objects inside the view will be identified and have a rectangle drawn around them.
You can ignore the error messages that appear during the startup of the script.
9. Use the model you trained yourself
Here's a guide that shows you how to train your own model and how to export it.
Add the frozen inference graph that you created during training on the Windows machine to the object_detection directory on the Raspberry Pi, and change the model path in the script.
Copy the object_detection\inference_graph directory from the Windows machine to the object_detection directory on the Raspberry Pi.
Copy object_detection\training\labelmap.pbtxt from the Windows machine to the object_detection\data\ directory on the Raspberry Pi.
Open the Object_detection_picamera.py script in a text editor (vi or nano).
Go to the line where MODEL_NAME is set and change the string to match the name of the new model folder.
Then, on the line where PATH_TO_LABELS is set, change the name of the labelmap file to match the new label map.
Change the NUM_CLASSES variable to the number of classes your model can identify.
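For example, if your exported model folder is named inference_graph, the label map is data/labelmap.pbtxt, and your model detects 2 classes, the edited lines might look like this (the names and class count are illustrative, and the exact expressions depend on your version of the script):

MODEL_NAME = 'inference_graph'
PATH_TO_LABELS = os.path.join(CWD_PATH, 'data', 'labelmap.pbtxt')
NUM_CLASSES = 2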
Run Object_detection_picamera.py.
Type:
python3 Object_detection_picamera.py --usbcam
The script should identify the objects in your model.
Reference
https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi