Saturday 11 September 2021

IOT: Designing the Arduino Part of the Face Recognition System

After finalizing the Stock Analyzer project, installing solar panels on my house and exploring Quantum Computers, I will continue with my IOT project. 

I am currently able to detect a face using OpenCV on a laptop. The next step will be to migrate that functionality to a headless Raspberry PI that is connected to an Arduino. In this blog post, I will design the Arduino board.

The person in the picture is more similar to James Dean than to Marlon Brando or Audrey Hepburn.
The training data is very limited, with only 20 pictures of three celebrities, and that causes the identifier to perform poorly.

The Arduino Panel
When a user presses a button on the Arduino, the RPI will take pictures every x seconds and try to identify the faces in the most recent picture. If a face is identified, the corresponding name is sent back to the Arduino, shown on the LCD, and the corresponding LEDs light up. If faces are detected but not identified, an alarm is activated. 

The Piezo element and the red LED to the left are used for alarms.
The button sends a start/stop signal to the connected Raspberry PI computer.
The four LEDs to the right identify four persons. 


There are three cases:
  • No face is detected in the last picture - nothing happens.
  • At least one familiar face is detected - disable the alarm, if active.
  • Faces are detected but none is familiar - start an alarm.
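
A minimal sketch of this decision logic in Python (the inputs are hypothetical: names holds one entry per detected face, with None for faces the identifier couldn't match):

# A sketch of the three cases. The real detection and identification
# come from the OpenCV work in earlier posts.
def decide(names, alarm_active):
    """Return (alarm_state, familiar_names) for the latest picture."""
    if not names:                      # case 1: no face detected
        return alarm_active, []       # leave everything as it is
    familiar = [n for n in names if n is not None]
    if familiar:                       # case 2: at least one familiar face
        return False, familiar         # disable the alarm, report the names
    return True, []                    # case 3: faces, but none familiar - alarm

print(decide([], False))               # (False, [])
print(decide(["Alice", None], True))   # (False, ['Alice'])
print(decide([None, None], False))     # (True, [])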


Saturday 21 August 2021

Using a Quantum Computer to Roll a Dice

There has been some buzz about Quantum Computers (QC) over the last few years. The QCs that are in use today are very limited, but their capacity is growing rapidly. QC advocates hope to see computers that can solve problems much faster than classical computers can. Others fear that some of the current cryptography systems may be broken with quantum algorithms, such as Shor's algorithm.

In this post, I will show how easy it is for a random blogger to create a simple Quantum algorithm and run it on a Quantum computer.

Really Cool.
Coolness level: 15 mK (-273.135 degrees Celsius)


Why a Dice?

There are tons of electronic dice out there. But they are made for classical computers, and classical computers are built to be predictable. So if you know the random seed and the algorithm of the random generator, it is possible to predict the outcome of a roll. 

For a physical dice, it is also possible (in theory) to predict the outcome, given the rotation, position and the translational movement of the dice, combined with the surface and atmospheric conditions of the environment. 

For Quantum Computers, the outcome is by its very nature random. The randomness comes from a superposition of two equal states that collapses into one state.

Step 0: Foundations

I strongly recommend taking a university level course in quantum mechanics to get familiar with the basic concepts. 

I enjoyed reading Jack Hidary's book Quantum Computing: An Applied Approach.



Anastasia Marchenkova has an interesting Youtube channel where she covers up-to-date material about Quantum Computing.

If you're really short on time, you can watch Wired's introduction to Quantum Computing at five levels of difficulty.


Step 1: Accessing IBM Quantum (Web)

The first step is to create an account on IBM Quantum. IBM Quantum allows the general public to upload programs to QCs and run them for free.

Once I've got an account, I can create programs to run, either on a simulated or physical Quantum computer. There is an online Development Editor that is useful for learning. 


Step 2: Create and Run a Simple Program
It is quite easy to add elements to the program - just click and drag them to the proper places.


In the program above, I use two Qubits. One is set to a superposition of |0> and |1>. The other Qubit is entangled with the first Qubit. Finally, both Qubits are measured, one by one.
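
For reference, the equivalent circuit in qiskit (which I'll use from Python in Step 3) looks something like this - a sketch, run here on the local Aer simulator rather than on IBM hardware:

from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put Qubit 0 in a superposition of |0> and |1>
qc.cx(0, 1)                  # entangle Qubit 1 with Qubit 0
qc.measure([0, 1], [0, 1])   # measure both Qubits, one by one

backend = Aer.get_backend('qasm_simulator')
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)                # ideally only '00' and '11'; real hardware adds noise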

To run the program on a QC, I select "Setup and Run", where I can choose the QC or simulator to use.
After completion, I explore the result:

Theory says that both Qubits shall have the same value. This happens most of the time, but sometimes the Quantum Computer fails. This means that results from these kinds of QCs should be treated with some caution.


Step 3: Creating the Dice from Python

I prefer to write the program in Python, and it is easy to do using the API key.

The Python program uses qiskit (one of the more popular frameworks for Quantum Computing).


The program sets up three Qubits in a superposition of |0> and |1> (50% likelihood each). It measures the three Qubits individually and saves the result as a binary bitmask of size 3 ("000" to "111"). To map the bitmask to a six-sided dice, I repeat the circuit until I get a value from "001"/1 to "110"/6. This is the output of the dice.
The red text is a hardcoded warning that I wasn't able to suppress.
The dice was re-rolled once, but the final value was 1.
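
The whole thing fits in a few lines of qiskit - a sketch of the dice logic, run here on the local simulator (the real run submits the circuit to IBM hardware via the API key):

from qiskit import QuantumCircuit, Aer, execute

def roll_quantum_dice():
    qc = QuantumCircuit(3, 3)
    qc.h([0, 1, 2])                    # 50/50 superposition on each Qubit
    qc.measure([0, 1, 2], [0, 1, 2])
    backend = Aer.get_backend('qasm_simulator')
    while True:
        bits = execute(qc, backend, shots=1,
                       memory=True).result().get_memory()[0]
        value = int(bits, 2)           # "000".."111" -> 0..7
        if 1 <= value <= 6:            # re-roll on 0 ("000") and 7 ("111")
            return value

print(roll_quantum_dice())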

That's it. There are no excuses. YOU can program a quantum computer and YOU can run the program on a physical quantum computer.

Saturday 31 July 2021

Silence Doesn't Mean Inaction

My blog has been silent for a couple of months. This often happens when I focus on other projects - and I think it should be that way. 

Children: Spending time with a baby and a pre-school child is sometimes challenging, always rewarding.

Work: In my new position (same company), I have more challenging and interesting tasks than before. I create reports, make changes in the test framework and work closer to the hardware. As I've had a steep learning curve at work, I've had a much slower pace for my pet projects.

Home: We have had a major renovation project in our house, including:

  • A new roof
  • Solar panels on the new roof (not yet connected to the power grid). More details on my other blog.
  • Wood floor for the attic
  • Replacing the side panels
  • Installing a skylight window
  • Installing a wood stove

The project took two months, and I didn't have time to focus on pet projects. 

As I've entered a long parental leave and we have no (major) projects planned for the house in the near future, I hope to have more time for pet projects. 

Saturday 1 May 2021

Python: Learning OpenCV and Detecting Faces from a Live Camera

The next step for my IOT project is to use facial recognition so that the Raspberry Pi can decide whether or not to alert the home owner.

I'll use OpenCV for this part. OpenCV is a very capable free package for computer vision and imaging.

OpenCV can be installed for Python and comes in four different options:

  1. Main modules: opencv-python
  2. Main modules with extra modules such as contributions from the opencv community: opencv-contrib-python
  3. Headless mode (no GUI modules): opencv-python-headless
  4. Headless mode with extra modules: opencv-contrib-python-headless

As I want to use it on a headless Raspberry Pi later, I'll go for the first option for development and the fourth option for deployment.

Detecting a face using OpenCV is a two-step process:

1. Detect the faces in a picture

2. Identify a face from step 1. That will require a training set with some images of the person to be identified.

Face Detection

OpenCV uses Haar cascades to detect various objects, such as faces, eyes, mouths and license plates. The models are available as XML files in the OpenCV GitHub repository, and no machine learning training is necessary for this step. 

After downloading the file haarcascade_frontalface_default.xml to a local folder, my script applies the Haar cascade model to a webcam session:
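
The script looks roughly like this - a minimal sketch, assuming the default webcam and typical parameter values (scaleFactor and minNeighbors are the knobs mentioned below):

import cv2

# Load the Haar cascade model downloaded from the OpenCV GitHub repository
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)                  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # scaleFactor and minNeighbors trade false positives against missed faces
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('Faces', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # quit with 'q'
        break
cap.release()
cv2.destroyAllWindows()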


The Haar cascade algorithm is quite sensitive to noise. In the image below, five faces were detected, but only one face was authentic.

In the right region, some false faces were detected.


It is possible to reduce the risk of false faces by tweaking some parameters, but then the risk of missing authentic faces increases. Below are some faces of Hollywood celebrities that weren't detected by the algorithm:

It seems that the algorithm fails to detect faces that are tilted too much. Shadows on the faces can also confuse the algorithm.

In the next blog post, I'll try to train an existing algorithm to identify faces.

Saturday 17 April 2021

RPI: Demo of First Sprint and Sending SMS from 4G Router

In my summer house setup, I have a 4G/Wifi router. The data plan I have allows for some SMS messages to be sent. 

To be able to send SMS messages automatically, I copied a script that I found on a French blog. The code and repo are designed for Jeedom, but it worked on the Raspberry PI, too. 

Demo of the First Sprint

With almost all targets met for this sprint, I am able to show a demo of what I've done in my pet project over the very limited spare time I have:

The 4G connection in the summer house is quite slow - 6 Mbit/s. Transmitting a 4 MB picture takes some 7 seconds, and buffering the video stream takes some time, too. When I tried with a better connection, it was a bit quicker.

The Code

I had to create a separate shell script for the stream and SMS setup.



NAT Forwarding

The IP number that the 4G router gets is a NAT-ed IP number in the 24-bit private block (addresses starting with 10). This makes it hard to reach the network from the external internet.

From forum discussions, it seems that I either need to buy a router from the current Internet Service Provider (ISP) or change ISP. Another option would be a VPN, but I'll investigate that later.

Next Step

I'll move the remaining task "RPI12: RPI server available from cellular" to the backlog. In the second sprint, I'll explore face recognition with OpenCV. Depending on the outcome, I'll add more tasks to the sprint later.



Saturday 3 April 2021

IOT: Server Behind Cellular Access Point

Update: My current ISP has NAT restrictions that affect the ability to reach a server externally.

My family's summer house is now equipped with a 4G hotspot that provides Wi-Fi connectivity to the house. That will make it suitable for remote surveillance.

I will need to connect the Raspberry PI computer to that network, but since IP addresses on cellular networks aren't static, the network will be hard to reach. This blog post explores how to reach a server behind a cellular network.

The task is divided into two sub tasks:

  1. Reach a server behind a router (Port Forwarding)
  2. Access a cellular router whose IP will change once in a while (DDNS)

Step 1: Reach a Server Behind a Router

This is quite straightforward - I just used the port forwarding settings. When a request is sent to the router on a specific port, the router forwards it to an IP number (and port) inside the local network. 

You can find much better explanations here.


In order to make my Raspberry PI less vulnerable to malicious access, I have changed the SSH port to a secret port number. 

On the router, I've enabled port forwarding from the new port number to my Raspberry PI. To verify, I opened an SSH connection from my laptop -> iPhone -> 4G network -> internet -> router -> Raspberry PI.

As an extra layer of safety, I installed fail2ban, a software that protects servers from brute force attacks.

Step 2: Handle Dynamic IP Numbers

This will be handled using DDNS (Dynamic Domain Name System). A script on the Raspberry PI will regularly report its current IP number to the DDNS server. Whenever a user looks up the DDNS name, the server will provide the current IP number.
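
A sketch of such an update script, using Duck DNS's HTTP API (the domain and token below are placeholders):

import requests

DOMAIN = "my-summer-house"            # hypothetical Duck DNS subdomain
TOKEN = "xxxxxxxx-xxxx-xxxx"          # account token from duckdns.org

# An empty "ip" parameter tells Duck DNS to use the caller's public IP.
# Run this regularly, for example from cron on the Raspberry PI.
resp = requests.get(
    "https://www.duckdns.org/update",
    params={"domains": DOMAIN, "token": TOKEN, "ip": ""},
    timeout=10,
)
print(resp.text)                      # "OK" on success, "KO" on failure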

Step 2a: Change the SSH port on the Raspberry PI

This is a security measure that becomes more important now that my network will be easier to find.

Step 2b: Register to a DDNS Service

The easiest option would be to make the router itself connect to a dynamic DNS service. That must be done on site.



No-IP or Duck DNS

D-Link had a DDNS service, but that one is unfortunately closed down. That's a pity, since I am using a D-Link router.


Step 2c: Register a Client

https://community.home-assistant.io/t/guide-how-to-set-up-duckdns-ssl-and-chrome-push-notifications/9722

The password will later be sent in plain text (via curl) to the server - don't use a password that you use for other services!

https://www.wundertech.net/how-to-setup-duckdns-on-a-raspberry-pi/

https://www.youtube.com/watch?v=uhJ1zQIjujg

https://www.youtube.com/watch?v=ZKEGP_qBmxg

Saturday 20 March 2021

RPI: Upload a File to Cloud in Raspberry Pi CLI

I want to upload a photo to Google Drive or Dropbox using a Python script in a Raspberry PI.

A home surveillance use case might be:
  • A sensor detects that someone has entered the room (not implemented yet!)
  • A camera takes a photo of the living room
  • The picture is uploaded to the cloud before the burglar destroys the Raspberry PI.
Trying Google Drive

I enabled the Google Drive OAUTH using the public documentation and a guide from Iperius Backup. When running the Python script, I got an error message telling me that I needed to verify the app/script with Google, and that process seemed complicated, so I decided to try another approach.

Testing Dropbox

After giving up on Google Drive, I found the Dropbox approach to be much more successful. It takes two steps to activate: create a local script that connects to Dropbox, and define what the script is allowed to do. 

First, I define what the script is allowed to do:
Step 1: Configure the new app access credentials in Dropbox
  1. Log in to Dropbox Developers and go to the App Console and select Create App.
  2. There are three steps to take:
    1. Choose an API - Dropbox allows only scoped access (the creator of the app can select what authorities the app can have).
    2. Choose the Type of Access You Need - I choose App Folder for security reasons. The Full Dropbox option would allow the app to access all files in my account, and that would be risky.
    3. Name your app - this name must be unique in Dropbox. You can't use a name that any other Dropbox developer has used.
Step 2: Now that the app is created, I need to define the scope (privileges) of the app. This is done in Scoped App.
Step 3: Select what the app shall be allowed to read and modify in my Dropbox account:
Step 4: Once that is configured, it is time to generate an access token. The default Access token expiration is Short-lived (expires in four hours). I select No expiration, click Generate and copy the code that is shown.
The access token must be regenerated if any access scope is changed.
The second part is to write the Python code. 
The script uploads the specified picture with the current time as file name.
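
A sketch of the upload script, assuming the no-expiration access token from step 4 (placeholder below) and an App Folder scoped app; "photo.jpg" stands in for the specified picture:

import datetime
import dropbox

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"    # the token generated in step 4

dbx = dropbox.Dropbox(ACCESS_TOKEN)

with open("photo.jpg", "rb") as f:
    # Use the current time as file name; the path is relative to the app folder
    name = "/" + datetime.datetime.now().strftime("%Y%m%d_%H%M%S") + ".jpg"
    dbx.files_upload(f.read(), name)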

That's it! When I run the script, the file is uploaded to my Dropbox account.

Of course, I could have set up Dropbox the normal way (assigning a folder and syncing it to Dropbox). In this case, I didn't want to store 2 GB of files on an SD card with limited disk space.

I found a video tutorial that illustrates how to do it:

In my project anatomy, there are only two steps left before starting the next sprint of my IOT project. 





Saturday 13 March 2021

IOT: Data from computer to Arduino

One of the tasks in my first IOT project anatomy is to send data from the Raspberry PI to my Arduino board. 

There is one Arduino feature to consider when sending serial data to the Arduino over the USB port: every time a serial connection is established, the Arduino is reset. This is a feature that simplifies the process of loading software onto the board. 

If a script establishes a serial connection to the Arduino, it takes one second for the board to boot. If the script sends serial data during that time, that data is lost. Further, if the connection is released and reestablished, the board reboots.

My solution to that issue is to wait for the board to boot and to keep the connection alive during the entire session.

The data must be encoded to binary format before it is sent.

The sender script specifies interface, baudrate and timeout, and the receiving script sends the incoming string to the display for a few seconds. 
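
A sketch of the sender side, assuming the Arduino shows up as /dev/ttyACM0 and the sketch reads at 9600 baud:

import time
import serial   # pyserial

# Open the port once and keep it open for the whole session,
# so the board is not reset by repeated reconnects.
with serial.Serial('/dev/ttyACM0', baudrate=9600, timeout=1) as conn:
    time.sleep(2)                            # wait out the reset caused by opening the port
    conn.write('Hello Arduino\n'.encode())   # strings must be encoded to bytes before sending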

And finally, the incoming message is shown on the Arduino LCD.

I'm getting close to completing the first step of my IOT journey. The remaining tasks are to make the Raspberry Pi available from the external internet, to send SMS from the router and to push pictures to the cloud from the Raspberry Pi whenever someone activates the emergency button.

After that, I'll plan the next steps for the IOT project.



Saturday 6 March 2021

IOT: Connecting Raspberry Pi to Thingspeak

This one was easier than I thought. I wanted to send/log data from my Raspberry Pi to Thingspeak. 

Step 1 - Activate a Thingspeak Account and set up a channel

The channel has a number of fields. In this case, I use only one field, "field1".

Step 2 - Get an API key for Thingspeak

The API Write key is necessary for Thingspeak to know what channel to publish to. 



Step 3 - Send data to Thingspeak using "POST" with the channel number and the data.
The free version of Thingspeak allows for one update every 15 seconds. My script simply takes a number from the console and posts it to Thingspeak at 15-second intervals.
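
A sketch of that script, assuming the Write API key from step 2 (placeholder below) and the single field "field1":

import time
import requests

API_KEY = "YOUR_WRITE_API_KEY"

while True:
    value = input("Value to log: ")
    resp = requests.post(
        "https://api.thingspeak.com/update",
        data={"api_key": API_KEY, "field1": value},
    )
    print("Entry id:", resp.text)   # Thingspeak returns the entry number, or 0 on failure
    time.sleep(15)                  # the free version allows one update every 15 seconds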


The result:

Saturday 20 February 2021

RPI: Streaming Video from Internal Website

Previously, I was able to set up a web camera with an update interval of 5 seconds. Now, I want to stream video from the camera.

Option 1: Using a Script to Implement a Super Simple Web Server with Webcam

I followed the tutorial and got a pretty good result. There is some lag in the video stream, but overall the experience is quite good.

The Python script implements:

  • a small web server, which can make it hard to embed into a larger web site. 
  • a stream using the camera that is fed to the web site.

In this case, the web server and stream are on port 8000. 

The drawback with this approach is that the web server is extremely simple and hard to integrate with other functionality. The other option is even simpler: using YouTube to stream the video.

Option 2: Stream Video Over YouTube

Step 1 - Preparations

First, I need to activate live streaming online on Youtube: 

"Sänd live" translates to "Go live".


There is a 24-hour delay to activate the "Go Live" functionality. For the mobile app, it seems that the "Go Live" feature is only available to accounts with more than 1000 subscribers. I plan to stream from the RPI using an encoder, so I hope that it will work anyway.

I expect streaming from an RPI to generate quite some heat, so I have removed the Lego case from my RPI as a precaution. 

Step 2 - Setting Up Livestream and saving URL and key

Step 3 - Running the ffmpeg / raspivid command from the RPI

I use this command:

raspivid -o - -t 0 -vf -hf -fps 30 -b 6000000 | ffmpeg -re -ar 44100 -ac 2 -acodec pcm_s16le -f s16le -ac 2 -i /dev/zero -f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental -f flv rtmp://a.rtmp.youtube.com/live2/<SESSION>

raspivid captures video from a Raspberry Pi Camera module. The different options are:

  • -o - means that the output will be sent to stdout. Actually, it will be piped to ffmpeg. 
  • -vf and -hf mean that the image will be flipped vertically and horizontally.
  • -fps 30 means that the stream will capture 30 frames per second
  • -b 6000000 means that the bit rate will be 6 Mbit per second. That may be too much for the built-in wifi adapter, so I may have to reduce the bitrate. 

The output is piped to ffmpeg, which is used to record, convert and stream video. 

  • -re means reading input at native frame rate.
  • -ar 44100 sets the audio sampling frequency. The Raspberry Pi Camera module doesn't support audio so I should be able to skip this one.
  • -ac 2 sets the number of audio channels to two. I should be able to skip this one too.
  • -acodec pcm_s16le sets the audio codec.  I should be able to skip this one too.
  • -f s16le forces the input format to signed 16-bit little-endian PCM. 
  • -i /dev/zero specifies the input filename. This input provides a continuous stream of null characters. I don't know why it is specified as the input; presumably it serves as a silent dummy audio track, since the stream is expected to contain audio.
  • -f h264 forces the format to H.264, a video coding format used in MPEG-4.
  • -vcodec copy means that the raw codec data is copied as is.
  • -acodec aac specifies the audio codec again (AAC for the output this time).
  • -ab 128k sets the audio bitrate to 128 kbit/s.
  • -g 50 sets the "Group of Pictures" size to 50.
  • -strict experimental specifies that ffmpeg is allowed to use experimental features instead of sticking strictly to the standards.
  • -f flv rtmp://a.rtmp.youtube.com/live2/<SESSION> forces the output to go to my Youtube stream.

The streaming key must be copied to the RPI CLI command.

I had ffmpeg installed already, so I didn't need to recompile it. The first streaming attempt had the image flipped upside down. After removing the -vf and -hf flags, the stream was initiated properly. 

It took a short while before the stream appeared on my Youtube channel.

This makes it much easier to access streams from my RPI. As long as I have the link to the stream, I can access it. I'll also be able to embed the stream into an HTML page.



Saturday 30 January 2021

IOT: Data from Arduino to Raspberry PI

In this step, I'll send data from Arduino to Raspberry PI. 

When the user activates the emergency function, a signal will be sent to the RPI, which will take a photo and publish it on a web server. You can find more information about the traffic lights project here.

Step 1: Connecting Arduino to Raspberry PI

The RPI is connected to a camera module.
A USB cable connects power and serial from the RPI to the Arduino.


The first step is to find the serial port. On the RPI, I've compared the tty ports with and without the Arduino connected.

The interface /dev/ttyACM0 shows up when I connect the Arduino over USB.

I uploaded a small Python script, with code that I found on DiyIOt, from my Windows computer to my Raspberry PI.

I added a couple of lines to take a photo on the webcam.


Step 2: Take a picture if the emergency button is pressed.
The script checks if the message matches the expected string. If it does, the script will ask the shell to take a photo and save it in the /var/www/html/ folder.
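
A sketch of the listening loop, assuming the Arduino sends the string "Switch to Emergency" at 9600 baud and that raspistill drives the camera module:

import subprocess
import serial   # pyserial

ser = serial.Serial('/dev/ttyACM0', 9600, timeout=1)
ser.reset_input_buffer()

while True:
    line = ser.readline().decode('utf-8', errors='ignore').strip()
    if line == "Switch to Emergency":
        # Lower resolution than the supported 3280x2464 to cut download time
        subprocess.run(['raspistill', '-w', '1024', '-h', '768',
                        '-o', '/var/www/html/photo.jpg'])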

When the RPI detects "Switch to Emergency", it captures a JPEG image.
In order to reduce the download time, I've selected a lower resolution than the 3280x2464 that is supported.

Step 3: Publish the image on the web server
The /var/www/html folder is owned by root. This makes it hard to save files there automatically. To resolve this, I've changed the ownership and permissions for that folder. 

A very simple web page that reloads every three seconds shows the picture. Code and screenshot below:

The updated webpage looks like this:

Now, an event on the Arduino can trigger the RPI to take a photo and show it on an internal web page. The next step will be to send some feedback from RPI to Arduino and to explore video streaming from RPI.