Object detection and Analytics using AI, Raspberry Pi and Docker


Imagine being able to capture live video streams, identify objects using deep learning, and then trigger actions or notifications based on the identified objects, all using Docker containers. With Pico, you will be able to set up and run a live video capture, analysis, and alerting solution prototype. The intention of this article is to showcase how easy it is to implement object detection and analytics using IoT devices like the Raspberry Pi and Docker containers.

The overall architecture is self-explanatory. A camera surveils a particular area, streaming video over the network to a video capture client. The client samples video frames and sends them over to AWS, where they are analyzed and stored along with metadata. If certain objects are detected in the analyzed video frames, SMS alerts are sent out. Once a person receives an SMS alert, they will likely want to know what caused it. For that, sampled video frames can be monitored with low latency using a web-based user interface.

The Pico framework uses a Kafka cluster to acquire data in real time. Kafka is a message-based distributed publish-subscribe system that offers high throughput and a robust fault-tolerance mechanism. The data source here is the video generated by the cameras attached to the Raspberry Pi.

List of Hardware:

  • Raspberry Pi 3 Model B
  • Raspberry Pi Infrared IR Night Vision Surveillance Camera Module 500W Webcam
  • 5MP Raspberry Pi 3 Camera Module W/ HBV FFC Cable

List of Software:

  1. Raspberry Pi OS
  2. Docker 19.03.x
  3. Python
  4. Amazon Cloud Subscription
  5. AWS Rekognition Service

To configure the camera interface, run the below command with sudo or as the root user:
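$ sudo raspi-config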

It will open a command-line UI window; choose Interfacing Options, select Camera, and enable it. Save and exit the CLI window.

You will also need to load the required driver “bcm2835-v4l2” to make your camera module work. If you miss this step, you will end up seeing a blank screen even though the application comes up without any issue.

$ sudo modprobe bcm2835-v4l2
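To load the driver automatically on every boot, you can also append it to /etc/modules:

$ echo "bcm2835-v4l2" | sudo tee -a /etc/modules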

Raspberry Pi OS (previously called Raspbian) is the official operating system for all models of the Raspberry Pi. We will be using Raspberry Pi Imager as an easy way to install Raspberry Pi OS on the Raspberry Pi.

Next, download Raspberry Pi OS. In case you are in a hurry, just run the below command and you should be good to go:

wget https://downloads.raspberrypi.org/raspios_full_armhf_latest

Next, we will install Raspberry Pi Imager. You can download it via https://www.raspberrypi.org/blog/raspberry-pi-imager-imaging-utility/

All you need to do is choose the right operating system and SD card, and the tool will flash the OS onto your SD card.

Click “Write” and it’s time to grab a coffee.

Once the write is successful, you can remove the SD card from the card reader and insert it into the Raspberry Pi's SD card slot.
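Tip: if you plan to run the Pi headless, you can enable SSH before the first boot by creating an empty file named ssh on the boot partition of the flashed card. For example, on macOS (the mount point differs on other systems):

$ touch /Volumes/boot/ssh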

SSH into the Raspberry Pi and verify the OS:

$ ssh pi@192.168.1.4
pi@raspberrypi:~ $ uname -arn
Linux raspberrypi 4.19.118-v7+ #1311 SMP Mon Apr 27 14:21:24 BST 2020 armv7l GNU/Linux

Installing Docker

pi@raspberrypi:~ $ sudo curl -sSL https://get.docker.com/ | sh
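Once the installation completes, you can optionally add the pi user to the docker group so that Docker commands work without sudo (log out and log back in for this to take effect):

pi@raspberrypi:~ $ sudo usermod -aG docker pi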

Verifying Docker Binaries

pi@raspi2:~ $ docker version
Client: Docker Engine - Community
 Version:           19.03.4
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        9013bf5
 Built:             Fri Oct 18 16:03:00 2019
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:29:22 2020
  OS/Arch:          linux/arm
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec368
To set up the Kafka cluster on AWS, you will need:

  • Docker Desktop for Mac or Windows
  • AWS account (you will require t2.medium instances for this)
  • AWS CLI installed
  • Docker Machine installed
Verify that your AWS credentials are in place:

[Captains-Bay]🚩 > cat ~/.aws/credentials
[default]
aws_access_key_id = XXXA
aws_secret_access_key = XX

[Captains-Bay]🚩 > aws --version
aws-cli/1.11.107 Python/2.7.10 Darwin/17.7.0 botocore/1.5.70

Setting up Environment Variables

The docker-machine command below expects your AWS credentials in ACCESS_KEY_ID and SECRET_ACCESS_KEY, along with the VPC and subnet details:

[Captains-Bay]🚩 > export ACCESS_KEY_ID=<your aws_access_key_id>
[Captains-Bay]🚩 > export SECRET_ACCESS_KEY=<your aws_secret_access_key>
[Captains-Bay]🚩 > export VPC=vpc-ae59f0d6
[Captains-Bay]🚩 > export SUBNET=subnet-827651c9
[Captains-Bay]🚩 > export ZONE=a
[Captains-Bay]🚩 > export REGION=us-west-2
[Captains-Bay]🚩 > docker-machine create --driver amazonec2 \
    --amazonec2-access-key=${ACCESS_KEY_ID} \
    --amazonec2-secret-key=${SECRET_ACCESS_KEY} \
    --amazonec2-region=us-west-2 \
    --amazonec2-vpc-id=vpc-ae59f0d6 \
    --amazonec2-ami=ami-78a22900 \
    --amazonec2-open-port 2377 \
    --amazonec2-open-port 7946 \
    --amazonec2-open-port 4789 \
    --amazonec2-open-port 7946/udp \
    --amazonec2-open-port 4789/udp \
    --amazonec2-open-port 8080 \
    --amazonec2-open-port 443 \
    --amazonec2-open-port 80 \
    --amazonec2-subnet-id=subnet-72dbdb1a \
    --amazonec2-instance-type=t2.micro \
    kafka-swarm-node1

Run the same command again with kafka-swarm-node2 as the machine name to bring up the second node.
[Captains-Bay]🚩 > docker-machine ls
NAME                ACTIVE   DRIVER      STATE     URL                         SWARM   DOCKER     ERRORS
kafka-swarm-node1   -        amazonec2   Running   tcp://35.161.106.158:2376           v18.09.6
kafka-swarm-node2   -        amazonec2   Running   tcp://54.201.99.75:2376             v18.09.6
SSH into the first node (docker-machine ssh kafka-swarm-node1) and initialize the Swarm using the node's private IP:

ubuntu@kafka-swarm-node1:~$ sudo docker swarm init --advertise-addr 172.31.53.71 --listen-addr 172.31.53.71:2377
Swarm initialized: current node (yui9wqfu7b12hwt4ig4ribpyq) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-xxxxxmr075to2v3k-decb975h5g5da7xxxx 172.31.53.71:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
ubuntu@kafka-swarm-node2:~$ sudo docker swarm join --token SWMTKN-1-2xjkynhin0n2zl7xxxk-decb975h5g5daxxxxxxxxn 172.31.53.71:2377
This node joined a swarm as a worker.
ubuntu@kafka-swarm-node1:~$ sudo docker node ls
ID                            HOSTNAME            STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
yui9wqfu7b12hwt4ig4ribpyq *   kafka-swarm-node1   Ready    Active         Leader           18.09.6
vb235xtkejim1hjdnji5luuxh     kafka-swarm-node2   Ready    Active                          18.09.6
Installing Docker Compose

curl -L https://github.com/docker/compose/releases/download/1.25.0-rc1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   617    0   617    0     0   2212      0 --:--:-- --:--:-- --:--:--  2211
100 15.5M  100 15.5M    0     0  8693k      0  0:00:01  0:00:01 --:--:-- 20.1M
root@kafka-swarm-node1:/home/ubuntu/dockerlabs/solution/kafka-swarm# chmod +x /usr/local/bin/docker-compose
ubuntu@kafka-swarm-node1:~/dockerlabs/solution/kafka-swarm$ sudo docker-compose version
docker-compose version 1.25.0-rc1, build 8552e8e2
docker-py version: 4.0.1
CPython version: 3.7.3
OpenSSL version: OpenSSL 1.1.0j  20 Nov 2018

Apache Kafka is an open-source stream-processing software platform developed by LinkedIn and donated to the Apache Software Foundation. It is written in Scala and Java. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.

Apache Kafka is a distributed, partitioned, and replicated publish-subscribe messaging system that is used to send high volumes of data, in the form of messages, from one point to another. It replicates these messages across a cluster of servers to prevent data loss and allows both online and offline message consumption. This makes Kafka fault-tolerant in the presence of machine failures while still supporting low-latency message delivery. In a broader sense, Kafka is a unified platform that guards against data loss and handles real-time data feeds.
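To make the publish-subscribe flow concrete, here is a minimal kafka-python round trip (illustrative only; the broker address and topic name are placeholders, not part of the Pico codebase):

from kafka import KafkaProducer, KafkaConsumer

# Publish a message to a topic
producer = KafkaProducer(bootstrap_servers="<broker-ip>:9092")
producer.send("testpico", b"hello from pico")
producer.flush()

# Subscribe and read the message back
consumer = KafkaConsumer(
    "testpico",
    bootstrap_servers="<broker-ip>:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating after 5s of silence
)
for message in consumer:
    print(message.value)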

git clone https://github.com/ajeetraina/developer
cd developer/solutions/iot/ai/pico/kafka/

Using docker stack deploy to set up a 3-node Kafka cluster

docker stack deploy -c docker-compose.yml mykafka
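Once the stack is deployed, you can verify that the Kafka and ZooKeeper services have come up (the exact service names depend on what the compose file defines):

docker stack services mykafka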

By now, you should be able to access Kafka Manager at https://<node-ip>:9000. Add a new cluster with the below details:

  • Cluster Name = pico (or whatever you want)
  • Cluster Zookeeper Hosts = zk-1:2181,zk-2:2181,zk-3:2181
  • Kafka Version = leave it at 0.9.0.1 even though we’re running 1.0.0
  • Enable JMX Polling = enabled

Click on Topic at the top center of Kafka Manager to create a new topic with the below details:

  • Topic = testpico
  • Partitions = 6
  • Replication factor = 2

This gives an even spread of the topic across the three Kafka nodes: 6 partitions with a replication factor of 2 means 12 partition replicas in total, i.e. 4 per node.

While saving the settings, Kafka Manager might ask you to set a few minimally required parameters. Feel free to follow the instructions provided.
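Alternatively, if you prefer the command line over Kafka Manager, the same topic can be created with the kafka-topics script that ships with Kafka (assuming you have a shell inside one of the broker containers):

kafka-topics.sh --create \
  --zookeeper zk-1:2181,zk-2:2181,zk-3:2181 \
  --topic testpico \
  --partitions 6 \
  --replication-factor 2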

Run the below Docker container to prepare the environment for the consumer scripts:

docker run -d -p 5000:5000 ajeetraina/opencv4-python3 bash
git clone https://github.com/ajeetraina/developer/
cd developer/solutions/iot/ai/pico

You will need two scripts: the image processor and the consumer.

cd deployment/objects/

This script is placed under the deployment/objects/ directory of the repository. Before you run this script, ensure that it has the right AWS Access Key and broker IP address.

python3 image_processor.py
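Under the hood, the heart of image_processor.py is a loop that pulls frames off Kafka and sends them to AWS Rekognition for labeling. The sketch below is a simplified illustration of that idea (the broker address, topic name, region, and thresholds are placeholder assumptions, not the exact script):

import boto3
from kafka import KafkaConsumer

# Consume JPEG-encoded frames published by the Raspberry Pi
consumer = KafkaConsumer("testpico", bootstrap_servers=["<broker-ip>:9092"])
rekognition = boto3.client("rekognition", region_name="us-west-2")

for message in consumer:
    # Each Kafka message carries one JPEG-encoded video frame
    response = rekognition.detect_labels(
        Image={"Bytes": message.value},
        MaxLabels=10,
        MinConfidence=70,
    )
    for label in response["Labels"]:
        print(label["Name"], round(label["Confidence"], 1))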

This script is placed under the same deployment/objects/ directory. Before you run this script, ensure that it has the right broker IP address.
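Then run it (the script name as referenced in the sequence section below):

python3 consumer.py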

On the Raspberry Pi, clone the repository, change into the same directory, and update the broker IP address in producer_camera.py:

git clone https://github.com/ajeetraina/developer
cd developer/solutions/iot/ai/pico
cd deployment/objects/
brokers = ["35.221.213.182:9092"]
Install the required dependencies on the Pi:

apt install -y python-pip libatlas-base-dev libjasper-dev libqtgui4 python3-pyqt5 libqt4-test
pip3 install kafka-python opencv-python pytz
pip install virtualenv virtualenvwrapper numpy
python3 producer_camera.py

Please note: this script should be run only after the consumer scripts (image_processor.py and consumer.py) have been started.
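For reference, the core of producer_camera.py is conceptually a capture-encode-publish loop like the sketch below (simplified; the topic name and the frame-handling details are assumptions, not the exact script):

import cv2
from kafka import KafkaProducer

brokers = ["35.221.213.182:9092"]  # replace with your broker IP
topic = "testpico"                 # the topic created earlier in Kafka Manager

producer = KafkaProducer(bootstrap_servers=brokers)
camera = cv2.VideoCapture(0)       # /dev/video0, exposed by the bcm2835-v4l2 driver

try:
    while True:
        success, frame = camera.read()
        if not success:
            break
        # JPEG-encode the frame and publish the raw bytes to Kafka
        ok, buffer = cv2.imencode(".jpg", frame)
        if ok:
            producer.send(topic, buffer.tobytes())
finally:
    camera.release()
    producer.flush()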

Pre-requisite:

  • Ensure that you have followed all the above steps.
  • Ensure that Docker Swarm is up and running on AWS Cloud

Sequence:

  • First, run the image_processor.py script on the AWS instance
  • Then, run the consumer.py script on the AWS instance
  • Finally, run the producer_camera.py script on the Pi

Place an object in front of the camera module and watch for both the text output and the object detection results under http://<broker-ip>:5000.

Originally published at https://github.com/collabnix/pico
