Having an old webcam hanging about and a Pi Zero (in a customized matchbox case, a MagPi freebie) available, I decided to try my hand at motion detection. Looking at some of the material around, I was introduced to PyImageSearch, a site which is excellent for anything to do with OpenCV. A lot of the ideas for my system came from there. Also very worthy of praise is Derek Simkowiak's work on tracking gerbils.
But to go back a bit in time, I came at this from a slightly different angle. My idea was based on how I thought robotic vision should work. Rather than processing every frame for faces in order to do facial recognition or object detection, I thought the following sequence of events should take place. First the robot should be triggered by movement. Then, within the movement area, which is a lot smaller than the whole frame, you should be able to identify a figure. From that you could identify a face in the top part of the figure, and from the face you should be able to identify a particular person. In my mind there was also a lot of inefficiency in the usual approach – why, after identifying a person, would you keep repeating the same procedure frame after frame? Once something or someone was recognized you should be able to follow it.
With that in mind I started off by looking at the Lightweight Motion Detector, which seemed a very simple, cut-to-the-bone motion detector that could serve as the first trigger. But I soon realized that it just wasn't sophisticated enough for a security camera looking out through a window, mainly because pixels move naturally due to atmospheric conditions (or fairies, as my daughter liked to think). That is where I came across Adrian at PyImageSearch's technique of accumulating a weighted average of the frames and then subtracting the current frame from that average. Contours are then calculated, which essentially draws a box around any movement, and the size of each box is used to decide whether it is genuine movement.
Where I differed from Adrian was that I didn't want to check every frame, and I wanted to save complete videos of the motion. But straight away there is an issue: if you only detect motion after, say, four frames and start filming at that point, you have lost the first part of the motion.
To get past this I came up with the idea of setting up a queue of images so that a couple of seconds of film is held at all times. If there isn't any motion, the frame at the front of the queue is removed and forgotten. If there is motion, the film can start to be saved from the queue, which gives a couple of seconds of film from before the event – a bit like going back in time. I also allowed a few seconds after motion has stopped before the filming finishes. The problem here was that if the postman went past the window to post letters through the door, the film would stop; but then he would go back past the window before the queue had had time to fill up again. So a lot of fiddling was required – first, to allow motion to be detected more rapidly if motion had only just stopped, so filming didn't have to stop and start in situations like that.
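The pre-event queue can be sketched with a bounded deque, which drops the oldest frame automatically once full. This is a toy illustration of the idea, not the project's actual queue code; the class and parameter names are made up.

```python
from collections import deque

class PreBuffer:
    """Hold the last few seconds of frames so that a recording can
    include the moments before motion was detected."""

    def __init__(self, fps=25, seconds=2):
        self.frames = deque(maxlen=fps * seconds)

    def add(self, frame):
        # When the deque is full, the oldest frame falls off automatically
        self.frames.append(frame)

    def flush(self):
        # On motion: hand the buffered "past" frames to the video writer
        past = list(self.frames)
        self.frames.clear()
        return past
```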
Another issue was too many things triggering the detection. Since the window looks out on to a road, every passing car set it off, and I got tired of looking at the videos. For that reason I created masked areas. But simply masking an area out was not really good enough: if someone walked off the road and down the path, and the road didn't count as a motion-detection area, then I would lose the first part of the movement. To get round that I created the concept of blue areas, where motion is not that important, and green areas, which are important; several blue-area hits plus a single green-area hit were enough to count as true movement.
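My reading of that blue/green rule can be sketched as a small decision function over rectangles. This is an assumed interpretation, not the project's actual code; the function name, the rectangle representation, and the `blue_needed` threshold are all illustrative.

```python
def genuine_motion(boxes, green_zones, blue_zones, blue_needed=2):
    """Decide whether detected motion counts as a real event: at least
    one hit in a green (important) zone plus several hits in blue
    (context) zones. All rectangles are (x, y, w, h) tuples."""

    def overlaps(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    green_hits = sum(any(overlaps(box, z) for z in green_zones) for box in boxes)
    blue_hits = sum(any(overlaps(box, z) for z in blue_zones) for box in boxes)
    return green_hits >= 1 and blue_hits >= blue_needed
```

This way a car that only crosses blue zones never triggers a recording, but someone walking off the road (blue) and down the path (green) does, with the blue hits preserved as the start of the clip.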
Though I didn't do the motion calculations every frame, there still seemed to be a lot of inefficiency: why would you wait while you calculated motion before grabbing the next frame from the camera? In my mind there were three distinct processes involved. The first is the camera-grabbing process, which should be consistently taking frames and pushing them on to the queue. This has the utmost priority, as dropping frames would ruin the film. The second is the motion detection, which has medium priority, as it doesn't need to process every frame in order to detect motion (unless you want to prove the existence of Superman). And the third is the process that takes frames off the queue and writes them to a file when required.
I saw an article on PyImageSearch about reducing latency using threads. However, I was a bit disappointed to find that this didn't work on the Pi Zero. To be fair, Adrian pointed out that, given the number of cores, the benefits of threading on the Pi Zero were minimal. My experience was worse: even without any processing on the image, merely having threads slowed the performance down.
But after a bit of research into multiprocessing I was able to rewrite the system with three processes running at the same time, though this is a bit more tricky to program, using events to communicate between the processes. One thing you can do is use the nice() facility, which sets the priority of a process. Maybe it should be called polite() instead, because the lower the figure, the more demanding on the system the process is. But it works really quite well: in good light the film runs at 27 to 30 frames per second, and it is mostly flicker free.
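A stripped-down toy version of the pattern, reduced to two processes, might look like this: a grabber pushes frames down a Pipe only when the checker signals with an Event that it is ready, and each process sets its own niceness. This is just a sketch of the mechanism, not the project's code; the names, the string stand-ins for frames, and the frame count are all invented for the demo.

```python
import os
from multiprocessing import Event, Pipe, Process

def grabber(conn, req, n_frames=5):
    """Highest priority: send a frame whenever the checker asks for one."""
    os.nice(0)  # unchanged niceness; lowering it needs root
    for i in range(n_frames):
        req.wait()           # block until the checker requests a frame
        req.clear()
        conn.send(f"frame-{i}")  # stand-in for a numpy image
    conn.send(None)          # sentinel: no more frames

def checker(conn, req, result):
    """Medium priority: request and inspect frames at its own pace."""
    os.nice(5)  # politer, i.e. less demanding on the system
    frames = []
    while True:
        req.set()            # ask the grabber for the next frame
        frame = conn.recv()
        if frame is None:
            break
        frames.append(frame)
    result.send(frames)

def run_demo():
    parent, child = Pipe()
    req = Event()
    res_parent, res_child = Pipe()
    p1 = Process(target=grabber, args=(parent, req))
    p2 = Process(target=checker, args=(child, req, res_child))
    p1.start(); p2.start()
    frames = res_parent.recv()
    p1.join(); p2.join()
    return frames
```

The Event is what stops the grabber from flooding the checker: the checker only asks for as many frames per second as it can actually process, while the grabber is free to keep the camera serviced in between.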
What about in bad light – what about at night? Well, I have a motion-detecting floodlight outside the house anyway, so if anything moves outside the light comes on. This means there is just about enough light to film any movements outside, though the result is a bit splodgy and films at around 4fps.
Though it really works well, the system does suffer from weather conditions. On a semi-cloudy, windy day the light can vary and create moving shadows. When the sun is low it reflects off bus windows or the sides of vans, which also creates false positives. Heavy wind can make the leaves of a bush move intensely, and unfortunately I can't mask this out, as I need to capture movement in front of it. To counter this I first need to ensure that the bush is well trimmed, and at some times of year I raise the minimum contour size in strong winds.
Sometimes I get what look like dementors flying past, which I assume to be insects. And strangely, for two weeks an insect of some kind kept walking on the window in front of the camera. Not just for a short while – it would do it once or twice every hour!
Uploading the Film
Of course a security camera is not much use if you can't check what's happening at home while you are away. I got this idea again from PyImageSearch, where Adrian sends a still photo to his Dropbox. Unfortunately the set of formats that OpenCV supports is very limited, and in particular it can't write mp4s. I tried various ways of converting from avi, but there didn't seem to be any way to do it on the fly, though you could once the file had been saved; that conversion, however, uses a lot of resources and is time consuming. Luckily there was little reason to fret: Dropbox kindly converts avi files on the fly when you click on them and streams the video back to you.
Having got this working I found there was a problem. I would make a small change to some setting or other, leave for work, and find that if it wasn't quite right I would have to wait until the evening before I could change anything. The same went for the masked areas. So I went a bit further and used my Dropbox folder to sync the configuration files so that they could be updated from afar; for the same debugging purpose I also uploaded the log files. If the config file changes, the motion detector program is restarted to take on the new values. I also wrote a program that checks that the motion detector is working correctly and tries to restart the system if it doesn't look healthy. As I wanted to check the logs from the whole system, I also created a socket logger which additionally logs anything from standard error – for example if OpenCV throws a wobbly. Finally there is a housekeeping program which removes old film files stored locally. It doesn't remove them from Dropbox; that I still do manually.
Code is uploaded here at bitbucket.
There are many possible ways to take this project further. I am thinking of possibly separating it into three separate services which communicate with each other. I could add another process which fires to detect a figure within the movement ROI (region of interest). Once movement is detected, object tracking might be preferable to continued motion detection – see CamShift. I also have plans to get the system to turn on only at certain times, and only when everyone has left the house; this could be done by Bluetooth proximity.
08/04/2018 – Some of these developments have been done in a later post – see Multi Processing OpenCV Video Image Environment.
November 24, 2017 at 12:13 am
Your code is not accessible at BitBucket.
November 26, 2017 at 6:09 pm
Terribly sorry about that. I've only just managed to sort it out as I've been away, but it should be OK now. Apologies to anyone else who has tried to access it.
January 31, 2018 at 3:23 am
This is very cool and I would love to try it. Does this work with the Pi Camera or is it exclusive to USB webcams?
January 31, 2018 at 8:19 am
Thanks Mark. You should be able to use it with the Pi Camera as long as you have a Video4Linux driver installed for it. This will set it up as device 0, which OpenCV will then open as the stream.
Failing that, a lot of people use PiCamera to capture the stream – I haven't tried this myself. Have a look at this: https://raspberrypi.stackexchange.com/questions/24262/getting-image-data-from-raspberry-pi-camera-module-in-opencv-with-python.
March 19, 2018 at 9:09 pm
I really like your project and I am trying to get it running with an RTSP webcam.
But something is not working.
I used the code from Adrian Rosebrock and changed it to my RTSP stream, which worked.
Now I tried your project and changed the src in the init to my RTSP link.
But for some reason the code hangs in multimotion.py, in the while pass loop at line 171.
Do you have any idea how I can dig deeper into the debugging?
March 20, 2018 at 1:02 pm
Thanks for your interest in this project. The problem you are getting suggests that the checkmotion process is not receiving an image. Check around lines 101-103 whether the frame (image) exists and that it is sent to self.ParentConn. You can do this by putting in a logging or print statement. If you still have problems, send me the code changes you have made and I'll take a look.
It is in the nature of multiprocessing programs that they are a bit difficult to debug. However, I'm in the process of writing a new version which should help with this. Watch this space.
March 21, 2018 at 2:51 pm
Thanks for your reply. I recompiled my whole OpenCV and swapped RTSP for a webcam (src=0), but it's still not working.
I get a frame and the resolution from the camera in the main process.
I think it has something to do with the process communication. Have you tried your code on other Pis, like the Pi 3?
The code seems to hang at self.ParentConn.send(frame)
March 21, 2018 at 10:26 pm
I'm not sure what the problem would be. I haven't tried the code on anything but a Pi Zero, but there is no reason to think that would be an issue.
If you are sure that the frame is correct and the code reaches the send, then I would check that lines 48 to 56 are correct. What you could also do is set frame=”2″ just before the self.ParentConn.send(frame) and then print out the frame in the checkmotion process at line 175, to see whether the pipe connection is valid. Try commenting out the reqEvent lines around the send and receive in case there is an issue with the Event; this will then check every frame for motion. Also try a really cut-down dummy version containing two processes with a pipe connection between them, and pass something through, just to establish that in principle there is no problem with multiprocessing on your machine. Let me know how it goes and if you need any more help.
March 23, 2018 at 3:50 pm
Hey Dani, thanks again for your help. I think I “found” the problem, but I still don’t know what causes it.
I may have some problems with the reqEvent. When I comment out lines 171, 172 and 185 it works.
But now I have more problems. When I take my use case (a 1080p cam) and start mmd.py, my RAM fills up in seconds. Could it be that the queued frames are not freed when they are no longer needed? Maybe you didn’t notice it because of your “small” resolution. 🙂
And another question: why do you sleep for 10 seconds in CheckMotion? I can see no real reason for it.
March 23, 2018 at 7:44 pm
Hi Maui. Part of the problem may be removing the reqEvent. Without it, the camera keeps pushing every frame to checkmotion, which can't keep up with the demand, so memory fills up. If you can't get the reqEvent to work, I would only send 3 frames per second through the connection. You also might need to reduce the queue size for larger images. The ten-second sleep was put in because the setSpeed function runs for that amount of time; it works out the frames-per-second rate and also warms up the camera, since motion might otherwise be detected as it warms up. You are right that it isn't strictly necessary, as the checkMotion process would wait anyway until it could receive the first image. Dani
March 23, 2018 at 4:48 pm
EDIT: I think the RAM is freed, but in htop it looks like the spawned processes don’t share memory and each makes its own copy of the frame queue. When I set a max item size on the queue, like Queue(50), the RAM no longer explodes but still sits at around 500MB.
For now I will live with the small feed my cam delivers (640x…).
March 23, 2018 at 7:51 pm
There shouldn’t be multiple copies of the queue; it should act like a stream from one process to the other. I don’t think there is a memory leak, because I have it running all day, every day.