So what is this? Well, partly an update to my previous project, the Multi Processing OpenCV Home Surveillance System, but also a lot more. It is a mix-and-match environment of classes that can be combined to do lots of different image tasks, each working in its own process. Ideal for rapid prototyping and the like.

Though the original project stands up well, it had a few weaknesses. The three processes (read camera, motion detect and write file), though running asynchronously, were rather tightly coupled, which made debugging difficult. The interface was slightly over-complicated, with stop events, write events and check events as well as the queues and pipes. I had also put in logic whereby the read camera process sent an image to the check motion process and to the write image process at the same time. This meant that if you wanted to see a rectangle showing the motion on the image you couldn't, as the writing of the video happened in a different process from the motion check. My initial idea was that you should be able to run the saved video file back through the motion detect, this time displaying the rectangle on the image as well as the masked area. The difficulties with this again pointed to the over-coupling of the processes.

So it slowly dawned on me that what was needed was some way of slotting processes in and out, linked by a pipe. You could then replace a read camera process linked to a motion detect process with a read file process linked to the motion detect process. And if you can do that, why not replace the write file process with a display video process? And so the imagination expanded. Instead of displaying the image with OpenCV you might want to stream to a web page. It also opened up the possibility of passing a motion-containing section of the image to an object detection or facial recognition process. Something I had originally planned but haven't got round to yet.

The way I implemented this was to create base classes for three types of process: a Send process, which initiates the flow of data; a Receive process, which receives the data; and a Transform process, which receives the data, manipulates it and then passes it on. So the flow looks like this:

Send Process -> Transform Process -> Transform Process -> Receive Process

The MiaEnv class then links these processes up with single-direction pipes. Because all this code is inherited, it leaves the logic of the main classes clean, with all the pipe connections happening under the bonnet. And though in my case image frames are the main thing transferred between processes, pipes can send pretty much anything.
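To give a flavour of how the plumbing works, here is a simplified sketch of the base classes and MiaEnv. This is not the real implementation (that lives in the repository), and the helper names setSendConn and setReceiveConn are just mine for illustration, but the idea is the same:

import multiprocessing as mp

class SendProcess(mp.Process):
    # starts a flow of data; subclasses call self.SendConn.send()
    def setSendConn(self, conn):
        self.SendConn = conn

class ReceiveProcess(mp.Process):
    # ends a flow of data; subclasses call self.ReceiveConn.recv()
    def setReceiveConn(self, conn):
        self.ReceiveConn = conn

class TransformProcess(SendProcess, ReceiveProcess):
    # sits in the middle with both a ReceiveConn and a SendConn
    pass

class MiaEnv:
    # links each adjacent pair of processes with a one-way pipe, then runs them
    def __init__(self, processes):
        for upstream, downstream in zip(processes, processes[1:]):
            receiveEnd, sendEnd = mp.Pipe(duplex=False)
            upstream.setSendConn(sendEnd)
            downstream.setReceiveConn(receiveEnd)
        for p in processes:
            p.start()
        for p in processes:
            p.join()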

Let's look at one of the simplest classes, which displays the video using OpenCV:

class DisplayCam(ReceiveProcess):

    def run(self):
        while True:
            frame = self.ReceiveConn.recv()
            if frame is None:                      # None is the signal to shut down
                break
            elif isinstance(frame, (float, str)):  # fps values and buffer commands are not needed here
                continue
            cv2.imshow('frame', frame)
            cv2.waitKey(1)
        cv2.destroyAllWindows()
        return

First of all, we see that this is a Receive process, so there is no subsequent process. The class doesn't have an __init__ method because it doesn't need one, but there is no reason why parameters couldn't be passed in.

frame = self.ReceiveConn.recv(). This line gets the frame from the preceding process, whatever that happens to be.

if frame is None: break. This terminates the process. Rather than having a stop event fire, the easiest way to stop every process in the flow is to pass None down the pipe. If the Send process captures Ctrl-C, then each process can be terminated in this way. And if you are running unattended, you can get the same effect by sending a SIGINT.
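For illustration, the sending end might look something like this sketch (the real ReadCam is in the repository; this just shows the shutdown idea):

import cv2

class ReadCam(SendProcess):

    def run(self):
        cap = cv2.VideoCapture(0)
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                self.SendConn.send(frame)
        except KeyboardInterrupt:         # Ctrl-C (or a SIGINT) lands here
            pass
        finally:
            cap.release()
            self.SendConn.send(None)      # None ripples down and stops everything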

Other than frames, the frames-per-second rate also gets passed down the pipe, as this varies depending on the light. The display doesn't use it, so we simply continue and take the next item from the pipe. The buffer class keeps a few seconds' worth of images as context, storing and then releasing them on command (a bit like a sink with a plug put in or taken out). To control this we pass it a string telling it when to do so, either “” or “”. The display class doesn't need these either and can safely ignore them. The rest of the code is simply OpenCV commands to display the image frame.
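To make the sink metaphor concrete, here is a rough sketch of how such a buffer could work. The command strings "hold" and "release" are placeholders I am using purely for illustration:

from collections import deque

class miaBuffer(TransformProcess):

    def run(self):
        context = deque(maxlen=100)          # a few seconds of frames at ~25 fps
        releasing = False
        while True:
            item = self.ReceiveConn.recv()
            if item is None:                 # pass the shutdown signal along
                self.SendConn.send(None)
                break
            elif isinstance(item, str):      # a command from the motion detector
                if item == "release":        # illustrative only: pull the plug out
                    for old in context:      # flush the stored context first
                        self.SendConn.send(old)
                    context.clear()
                    releasing = True
                elif item == "hold":         # illustrative only: put the plug back in
                    releasing = False
            elif isinstance(item, float):    # the fps just gets passed along
                self.SendConn.send(item)
            elif releasing:
                self.SendConn.send(item)
            else:
                context.append(item)         # rolling context; oldest frames drop off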

A SendProcess is very similar, except that it will have a line such as self.SendConn.send(frame) to pass the frame or fps on to the next process. A TransformProcess contains both a ReceiveConn and a SendConn.
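A do-nothing-much TransformProcess, made up here just to show the shape, could be as simple as:

import cv2
import numpy as np

class Grayscale(TransformProcess):

    def run(self):
        while True:
            item = self.ReceiveConn.recv()
            if isinstance(item, np.ndarray):                  # only touch actual frames
                item = cv2.cvtColor(item, cv2.COLOR_BGR2GRAY)
            self.SendConn.send(item)                          # fps, commands and None pass straight through
            if item is None:
                break

It could then be dropped into any flow, e.g. MiaEnv([ReadCam(), Grayscale(), DisplayCam()]).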

Streaming to the Web
As I said earlier, following the logic of this project the user should be able to send the images to a web page. Initially I went down the route of installing Flask and trying to get that working, but I couldn't get it to play nicely with this system. As it turns out, it really isn't that difficult to do the streaming yourself with just a socket listening on a port.

When a request comes in, its first line tells you which page the browser is looking for. If you point your browser at the IP of the machine running the server, followed by the port, e.g. http://192.168.1.183:8888/, the web server (miaWebServer) will receive "GET / HTTP/1.1" as the first line of the request. This means it is requesting /, i.e. the default page. When we get this request we send back a response, initially HTTP/1.1 200 OK, followed by a very simple html page.

<html><head>
 <title>Cyber-Renegade Video Streaming Demonstration</title>
 </head><body><h1>Cyber-Renegade Video Streaming Demonstration</h1>
 <img id="bg" src="/video_feed">
 </body></html>

Very sneakily, the source of the image is set to /video_feed, which triggers a new request to the same server, this time with "GET /video_feed HTTP/1.1" as the first line. When we see this we first return HTTP/1.1 200 OK and then Content-Type: multipart/x-mixed-replace; boundary=frame, which tells the browser what it is about to receive. Then for each frame we send --frame followed by Content-Type: image/jpeg. Be aware that getting the carriage returns right is critical to making it work. Notice also that each image needs converting to jpeg and then to bytes for display on the web. OpenCV has a slightly strange way of dealing with jpegs, but converting like this works fine.
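Stripped right down, and with all the error handling removed, the serving logic is something like this sketch (the real miaWebServer is in the repository):

import socket
import cv2

class miaWebServer(ReceiveProcess):

    PAGE = (b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n"
            b"<html><head><title>Cyber-Renegade Video Streaming Demonstration</title>"
            b"</head><body><h1>Cyber-Renegade Video Streaming Demonstration</h1>"
            b'<img id="bg" src="/video_feed"></body></html>')

    def run(self):
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("", 8888))
        server.listen(1)                               # one connection is all we allow
        conn, _ = server.accept()
        firstLine = conn.recv(1024).decode().split("\r\n")[0]
        if firstLine.startswith("GET / "):             # the default page was asked for
            conn.sendall(self.PAGE)                    # its <img> tag triggers /video_feed
            conn.close()
            conn, _ = server.accept()                  # here comes the second request
            conn.recv(1024)
        conn.sendall(b"HTTP/1.1 200 OK\r\n"
                     b"Content-Type: multipart/x-mixed-replace; boundary=frame\r\n\r\n")
        while True:
            frame = self.ReceiveConn.recv()
            if frame is None:
                break
            if isinstance(frame, (float, str)):        # fps and commands aren't wanted here
                continue
            ok, jpg = cv2.imencode('.jpg', frame)      # OpenCV's slightly odd jpeg route
            if ok:                                     # the carriage returns below are critical
                conn.sendall(b"--frame\r\n"
                             b"Content-Type: image/jpeg\r\n\r\n"
                             + jpg.tobytes() + b"\r\n")
        conn.close()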

This is a pretty simple way to stream to the web, though the system lacks a few things you would want in production. It only allows one connection, which is fine by me as I don't need anyone else to see it. As it stands it will only work within a local network or wifi; to view it across the internet you would have to do some more work. If you are interested, look at ngrok, which creates a tunnel from the internet to your server. In a live system you should also deal with broken connections and only stream while a browser is connected; for that you might want to look at web sockets. But given those caveats, it still works really well in a very simple way.

So what can it do?
Here is a list of a few things that can be done.

 # Display video from the camera.
 readCam = ReadCam()
 displayVid = DisplayCam()
 bsPrs = MiaEnv([readCam, displayVid])

 # Read a video file and write to a separate file.
 rdVid = ReadVideofile('./Video12-09-52-175806.avi')
 writeVideo = WriteVideofile('./temp.avi')
 bsPrs = MiaEnv([rdVid, writeVideo])

 # Write motion detect to a file.
 writeVideo = WriteVideofile()
 Buff = miaBuffer()
 redd = ReadCam()
 Motion = miaMotionDetect()
 bsPrs = MiaEnv([redd, Motion, Buff, writeVideo])

 # Display camera to web page. Open page in browser.
 mserv = miaWebServer()
 readCam = ReadCam()
 bsPrs = MiaEnv([readCam, mserv])

 # Display camera with motion detection to web page. Open page in browser.
 mserv = miaWebServer()
 readCam = ReadCam()
 Motion = miaMotionDetect()
 bsPrs = MiaEnv([readCam, Motion, mserv])

There is much more that could be bolted on. If you wanted to do a timelapse video, simply use ReadCam to pass images to a timelapse process which passes on only a small fraction of the frames to the write video process.
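A sketch of how that timelapse process might look (made up here for illustration):

class Timelapse(TransformProcess):

    def run(self):
        count = 0
        while True:
            item = self.ReceiveConn.recv()
            if item is None:
                self.SendConn.send(None)           # pass the stop signal on
                break
            if isinstance(item, (float, str)):
                self.SendConn.send(item)           # fps and commands go straight through
                continue
            count += 1
            if count % 50 == 0:                    # keep one frame in fifty
                self.SendConn.send(item)

It would slot straight into a flow such as MiaEnv([ReadCam(), Timelapse(), WriteVideofile('./timelapse.avi')]).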

Another thing that I will do shortly is add a TransformList process, which will still accept a single pipe connection coming in but will be able to output to several processes rather than just one.

All the code can be found on Bitbucket.