I am receiving a video stream from an external source (a Pupil eye tracker) frame by frame, as a buffer, via zmq. I have to process each frame in two ways: draw a bounding box on the image with a certain logic and display it in one view, and use that bounding box to crop a region of interest and display it in another. The bounding box will always have the same size.
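
For reference, my receive loop looks roughly like the sketch below; the endpoint, the frame size, and the assumption that each message is a single raw 8-bit RGB buffer are placeholders for my actual Pupil setup:

```python
import numpy as np
import zmq

ENDPOINT = "tcp://127.0.0.1:5556"   # placeholder publisher address
FRAME_W, FRAME_H = 640, 480         # placeholder frame size (8-bit RGB assumed)

ctx = zmq.Context.instance()
sub = ctx.socket(zmq.SUB)
sub.connect(ENDPOINT)
sub.setsockopt_string(zmq.SUBSCRIBE, "")   # subscribe to all topics

def recv_frame():
    """Block for one message and view it as an (H, W, 3) uint8 array."""
    buf = sub.recv()                             # raw frame bytes from the tracker
    frame = np.frombuffer(buf, dtype=np.uint8)   # read-only view, no copy
    return frame.reshape(FRAME_H, FRAME_W, 3)
```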

As a POC, I added two labels to a widget, and each label is updated by a separate thread through a changePixmap signal; both threads currently receive the frames from the source. A rough sketch of this POC follows the list below. However, I am facing a couple of issues:

  • I am not sure of the best way for the two threads to interact, since the second thread needs the bounding box computed by the first thread to render the region of interest.
  • The rendering is not very performant; the UI starts to lag after a while.

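The POC itself is roughly the sketch below; recv_frame is the receive helper from the sketch above, and the per-thread bounding-box / crop logic is elided:

```python
import sys
from PyQt5.QtCore import QThread, pyqtSignal
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtWidgets import QApplication, QHBoxLayout, QLabel, QWidget

class FrameThread(QThread):
    """One of the two POC threads: grabs frames and emits a ready QImage."""
    changePixmap = pyqtSignal(QImage)

    def run(self):
        while True:
            frame = recv_frame()               # receive helper sketched above
            # ... bounding-box / crop logic for this view would go here ...
            h, w, _ = frame.shape
            # Format_RGB888 assumes RGB byte order; .copy() so the QImage owns its data
            img = QImage(frame.tobytes(), w, h, 3 * w, QImage.Format_RGB888).copy()
            self.changePixmap.emit(img)        # delivered to the GUI thread (queued)

class Viewer(QWidget):
    def __init__(self):
        super().__init__()
        layout = QHBoxLayout(self)
        self.full_label, self.roi_label = QLabel(self), QLabel(self)
        layout.addWidget(self.full_label)
        layout.addWidget(self.roi_label)

        self.full_thread = FrameThread(self)
        self.full_thread.changePixmap.connect(self.show_full)
        self.full_thread.start()

        self.roi_thread = FrameThread(self)
        self.roi_thread.changePixmap.connect(self.show_roi)
        self.roi_thread.start()

    def show_full(self, img):
        self.full_label.setPixmap(QPixmap.fromImage(img))

    def show_roi(self, img):
        self.roi_label.setPixmap(QPixmap.fromImage(img))

if __name__ == "__main__":
    app = QApplication(sys.argv)
    viewer = Viewer()
    viewer.show()
    sys.exit(app.exec_())
```

This is essentially the standard changePixmap pattern duplicated once per label, which is where both issues above come from.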

As an alternative, I am thinking of moving to a model-view approach, where I will use two QAbstractTableModels to store pixel-wise data: one for the current frame and one for the region of interest. Correspondingly, I will create a QImage for each model. I am thinking of a flow like this:

The buffer updates the data in the first model -> a signal notifies the change, after which I convert the data to an image and compute the bounding box -> the pixels in the first table that fall on the bounding box are set to a certain colour -> the region of interest in the second table is updated by picking the indexes inside the bounding box from the previous step.
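
Roughly, the sketch below is what I have in mind for this flow; the FrameModel class, its frameChanged signal, and the fixed (top, left, height, width) bbox tuple are placeholders I made up, and the actual box-finding logic is omitted:

```python
import numpy as np
from PyQt5.QtCore import QAbstractTableModel, QModelIndex, Qt, pyqtSignal

class FrameModel(QAbstractTableModel):
    """Holds one frame as a numpy array, one table cell per pixel (RGB tuple)."""
    frameChanged = pyqtSignal()   # made-up convenience signal for "new frame arrived"

    def __init__(self, height, width, parent=None):
        super().__init__(parent)
        self._frame = np.zeros((height, width, 3), dtype=np.uint8)

    def rowCount(self, parent=QModelIndex()):
        return self._frame.shape[0]

    def columnCount(self, parent=QModelIndex()):
        return self._frame.shape[1]

    def data(self, index, role=Qt.DisplayRole):
        if role == Qt.DisplayRole and index.isValid():
            r, g, b = self._frame[index.row(), index.column()]
            return (int(r), int(g), int(b))
        return None

    def set_frame(self, frame):
        """Step 1: the zmq buffer replaces the whole frame, listeners are notified."""
        self.beginResetModel()
        self._frame = np.array(frame, dtype=np.uint8)   # writable copy
        self.endResetModel()
        self.frameChanged.emit()

    @property
    def frame(self):
        return self._frame


def on_frame_changed(full_model, roi_model, bbox):
    """Steps 2-4: paint the (fixed-size) box into the first model, then copy the
    boxed region into the second model. bbox = (top, left, height, width)."""
    top, left, h, w = bbox
    frame = full_model.frame
    frame[top, left:left + w] = (255, 0, 0)              # top edge
    frame[top + h - 1, left:left + w] = (255, 0, 0)      # bottom edge
    frame[top:top + h, left] = (255, 0, 0)               # left edge
    frame[top:top + h, left + w - 1] = (255, 0, 0)       # right edge
    full_model.dataChanged.emit(full_model.index(top, left),
                                full_model.index(top + h - 1, left + w - 1))
    roi_model.set_frame(frame[top:top + h, left:left + w])
```

Even sketched out like this it feels like a lot of machinery just to move pixels between two views, hence the questions below.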

I have a few queries with this approach:
  • Is this the best way to approach this problem? To me it seems very complicated. Any other thoughts on how I can go about it?
  • What is the best way to store the image in a table model, and how do I get a numpy array into a QAbstractTableModel?
  • How should I convert the data in a table to an image as fast as possible? Should I use setPixel on the QImage and then setPixmap on the label?


Any help is greatly appreciated.