Render two videos in an application



divzar
29th July 2019, 17:09
I am receiving a video stream from an external source (a Pupil eye tracker) frame by frame, as a buffer, via zmq. I have to process each frame in two ways: draw a bounding box on the image using certain logic and display it in one view, and use that bounding box to crop a region of interest and display it in the other. The bounding box always has the same size.
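For context, here is a minimal sketch of my receiving side, assuming the frames arrive as raw bytes on a zmq SUB socket; the endpoint, topic handling, and frame geometry below are placeholders, not the actual Pupil API:

import numpy as np
import zmq

# Placeholder endpoint and frame geometry -- adjust to the actual source.
ENDPOINT = "tcp://127.0.0.1:5555"
WIDTH, HEIGHT, CHANNELS = 640, 480, 3

ctx = zmq.Context.instance()
sock = ctx.socket(zmq.SUB)
sock.connect(ENDPOINT)
sock.setsockopt_string(zmq.SUBSCRIBE, "")  # subscribe to everything

while True:
    buf = sock.recv()  # one raw frame per message (assumption)
    # Wrap the bytes without copying and view them as H x W x C pixels.
    frame = np.frombuffer(buf, dtype=np.uint8).reshape(HEIGHT, WIDTH, CHANNELS)
    # ... hand `frame` over to the processing code ...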

As a proof of concept, I added two labels to a widget, each with a separate thread to handle changePixmap; both threads currently receive the frames from the source. However, I am facing a couple of issues:


* I am not sure of the best way for the two threads to interact, since the second thread uses the bounding box computed by the first thread to render the region of interest.
* The rendering is not very performant; I can see the UI lag after a certain point in time.


As an alternative, I am considering a model-view approach, where I would use two QAbstractTableModels to store per-pixel information: one for the current frame and one for the region of interest. Correspondingly, I would create a QImage for each model. I am thinking of a flow like this:

1. The buffer updates the data in the first model.
2. A signal notifies the change; I then convert the model data to an image and compute the bounding box.
3. The pixel indexes in the first table that correspond to the bounding box are set to a certain color.
4. The region of interest in the second table is updated by picking the indexes inside the bounding box from the previous step.
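If I do go the model route, I imagine step 1 looking roughly like this minimal sketch of a QAbstractTableModel backed directly by a numpy array (one cell per pixel, a grayscale H x W uint8 frame assumed for simplicity; PyQt5):

import numpy as np
from PyQt5.QtCore import Qt, QAbstractTableModel, QModelIndex

class FrameModel(QAbstractTableModel):
    """Exposes an H x W uint8 numpy array, one table cell per pixel."""

    def __init__(self, frame: np.ndarray, parent=None):
        super().__init__(parent)
        self._frame = frame

    def rowCount(self, parent=QModelIndex()):
        return 0 if parent.isValid() else self._frame.shape[0]

    def columnCount(self, parent=QModelIndex()):
        return 0 if parent.isValid() else self._frame.shape[1]

    def data(self, index, role=Qt.DisplayRole):
        if role == Qt.DisplayRole and index.isValid():
            return int(self._frame[index.row(), index.column()])
        return None

    def update_frame(self, frame: np.ndarray):
        """Replace the whole frame and notify any attached views."""
        self._frame = frame
        top_left = self.index(0, 0)
        bottom_right = self.index(self.rowCount() - 1, self.columnCount() - 1)
        self.dataChanged.emit(top_left, bottom_right)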

I have a few queries with this approach:

* Is this the best way to approach the problem? It seems very complicated to me. Any other thoughts on how I could go about it?
* What is the best way to store an image in a table, and how do I add a numpy array to a table model?
* How do I convert the data in a table back to an image as fast as possible? Should I use setPixel on the QImage and then call setPixmap?
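On the third query, my understanding is that per-pixel setPixel calls are slow from Python; a QImage can instead be constructed directly over the numpy buffer. A minimal sketch, assuming a contiguous 8-bit RGB frame and PyQt5:

import numpy as np
from PyQt5.QtGui import QImage, QPixmap

def frame_to_pixmap(frame: np.ndarray) -> QPixmap:
    """Wrap a contiguous H x W x 3 uint8 RGB frame, then convert to a pixmap."""
    h, w, _ = frame.shape
    # QImage does not copy the buffer, so `frame` must outlive `image`;
    # QPixmap.fromImage() below makes its own deep copy.
    image = QImage(frame.data, w, h, 3 * w, QImage.Format_RGB888)
    return QPixmap.fromImage(image)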


Any help is greatly appreciated.

anda_skoa
1st August 2019, 19:27
Using a model sounds wrong; this is image/pixel data, not something shown in an item view.

Since you likely want to show the same image in parallel in both views, using two threads also sounds like more trouble than necessary.

There are two likely approaches:

1)
* The worker thread receives the data and creates an image
* It creates a second image by applying the crop region
* It stores both images in variables accessible by the main thread (under the protection of a mutex, obviously)
* It notifies the main thread of the new images via a signal

* A slot on the main thread retrieves the two images and puts one into each QLabel (see the sketch below)
* Conversion from QImage to QPixmap can take a bit of processing time if the image data formats are not the same, e.g. the color depth being different
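A minimal PyQt5 sketch of approach 1; receive_frames() and find_bbox() are placeholders for the zmq reader and the detection logic, and frames are assumed to be contiguous H x W x 3 uint8 RGB arrays:

from PyQt5.QtCore import Qt, QThread, QMutex, QMutexLocker, pyqtSignal
from PyQt5.QtGui import QImage, QPainter, QPen, QPixmap

class FrameWorker(QThread):
    new_images = pyqtSignal()  # the images themselves are fetched via images()

    def __init__(self, parent=None):
        super().__init__(parent)
        self._mutex = QMutex()
        self._full = QImage()   # latest full frame with the bounding box drawn
        self._crop = QImage()   # latest region of interest

    def images(self):
        """Called from the main thread; returns the latest pair of images."""
        with QMutexLocker(self._mutex):
            return QImage(self._full), QImage(self._crop)

    def run(self):
        for frame in receive_frames():                  # placeholder generator
            x, y, w, h = find_bbox(frame)               # placeholder detection
            rows, cols = frame.shape[0], frame.shape[1]
            full = QImage(frame.data, cols, rows, 3 * cols,
                          QImage.Format_RGB888).copy()  # .copy() detaches from the buffer
            crop = full.copy(x, y, w, h)                # crop before drawing the box
            painter = QPainter(full)                    # painting a QImage off the GUI thread is allowed
            painter.setPen(QPen(Qt.red, 2))
            painter.drawRect(x, y, w, h)
            painter.end()
            with QMutexLocker(self._mutex):
                self._full, self._crop = full, crop
            self.new_images.emit()                      # delivered queued across threads

On the GUI side, connect the signal and update both labels from one slot:

# in the widget's __init__:
#   self.worker = FrameWorker(self)
#   self.worker.new_images.connect(self.update_labels)
#   self.worker.start()

def update_labels(self):
    full, crop = self.worker.images()
    self.full_label.setPixmap(QPixmap.fromImage(full))
    self.roi_label.setPixmap(QPixmap.fromImage(crop))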

2)
* The worker thread receives the data and creates an OpenGL texture
* It performs the cropping in OpenGL, generating a second texture
* OpenGL widgets display the two textures

Cheers,
_

wysota
1st August 2019, 22:38
Regarding approach 2, you don't even need to crop to get the second texture; just modify the texture coordinates when displaying it in the second view, so that only the needed part is rasterized.
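For illustration, a minimal sketch of that idea with PyQt5 plus PyOpenGL, assuming the legacy fixed-function pipeline (a compatibility profile context) and, for simplicity, an upload done on the GUI thread; the ROI is applied purely through texture coordinates at draw time:

from OpenGL import GL
from PyQt5.QtWidgets import QOpenGLWidget

class TextureView(QOpenGLWidget):
    """Displays a texture; `roi` in [0, 1] texture space selects what is shown."""

    def __init__(self, roi=(0.0, 0.0, 1.0, 1.0), parent=None):
        super().__init__(parent)
        self.roi = roi          # (u, v, width, height) in texture coordinates
        self.texture_id = None

    def initializeGL(self):
        self.texture_id = GL.glGenTextures(1)

    def upload(self, frame):
        """Upload an H x W x 3 uint8 RGB numpy frame into the texture."""
        self.makeCurrent()
        GL.glBindTexture(GL.GL_TEXTURE_2D, self.texture_id)
        GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR)
        GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGB,
                        frame.shape[1], frame.shape[0], 0,
                        GL.GL_RGB, GL.GL_UNSIGNED_BYTE, frame)
        self.doneCurrent()
        self.update()           # schedule a repaint

    def paintGL(self):
        u, v, w, h = self.roi
        GL.glEnable(GL.GL_TEXTURE_2D)
        GL.glBindTexture(GL.GL_TEXTURE_2D, self.texture_id)
        GL.glBegin(GL.GL_QUADS)  # full-viewport quad; only the ROI gets sampled
        # Note: GL's texture origin is bottom-left, so a vertical flip may be needed.
        GL.glTexCoord2f(u,     v);     GL.glVertex2f(-1.0, -1.0)
        GL.glTexCoord2f(u + w, v);     GL.glVertex2f( 1.0, -1.0)
        GL.glTexCoord2f(u + w, v + h); GL.glVertex2f( 1.0,  1.0)
        GL.glTexCoord2f(u,     v + h); GL.glVertex2f(-1.0,  1.0)
        GL.glEnd()

One instance shows the whole frame with roi=(0, 0, 1, 1), the other just the bounding box mapped into [0, 1] texture space. Two such widgets can sample the same uploaded texture if the application sets Qt.AA_ShareOpenGLContexts before the QApplication is created; otherwise, upload the frame to each widget.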