
View Full Version : OpenCV image sending through Qt socket



franco.amato
15th May 2018, 07:19
Hi,
I'm trying to port to Qt a program that sends webcam frames (cv::Mat) through a socket, like streaming video. I could do it successfully using plain C++ and pthreads, but I'm having issues with the QTcpSocket mechanism.

The program logic is really simple: on the client side I capture webcam frames and, after connecting, send them to the server.

In a timer slot I capture each frame, resize it to 640x480, convert it to grayscale, and flag it for writing to the socket:



// The main code of the timer slot
QObject::connect(&captureLoopTimer, &QTimer::timeout, [&]()
{
    mutex.lock();
    capture.read(frame);
    if(frame.empty())
    {
        qWarning() << "Empty frame";
        mutex.unlock();
        return;
    }
    else
    {
        qDebug() << "Frame received";
    }
    // I correctly got a webcam frame -> mirror it and convert to grayscale
    flip(frame, frame, 1);
    cvtColor(frame, gray, CV_BGR2GRAY);

    // Frame is ready to be written to the TCP socket
    isDataReady = 1;
    mutex.unlock();
});

The frames are written to the QTcpSocket in a separate thread launched with QtConcurrent::run(writeFramesToSocket); the main code follows:


void writeFramesToSocket()
{
    QTcpSocket socket;
    socket.connectToHost("127.0.0.1", 2222);
    if(socket.waitForConnected(5000))
    {
        qDebug() << "Connected...";
    }
    else
    {
        qWarning() << "Can't connect to host";
        return;
    }

    reshaped = gray.reshape(0, 1); // To make it continuous
    unsigned int frameSize = reshaped.total() * reshaped.elemSize(); // Frame size is 307200 (640x480)
    qint64 bytes;

    while(1)
    {
        if(aborted) // "aborted" is a global flag
        {
            socket.disconnectFromHost();
            break;
        }

        QMutexLocker locker(&mutex);
        if(isDataReady == 1)
        {
            qDebug() << "Writing frame to socket...";
            bytes = socket.write((char*)reshaped.data, frameSize);
            socket.waitForBytesWritten();
            isDataReady = 0;
        }
        locker.unlock();
        QThread::usleep(100);
    }
}
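As an aside on the isDataReady handshake above: polling the flag with usleep(100) works but wastes cycles; blocking on a condition variable (QWaitCondition in Qt) lets the writer thread sleep until the timer slot publishes a frame. A minimal plain C++ sketch of that pattern (no Qt; FrameChannel and all names here are hypothetical, not from the original program):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Replaces the isDataReady + usleep() polling loop: the producer
// publishes a frame and notifies; the consumer blocks until a frame
// is ready or shutdown is requested.
struct FrameChannel {
    std::mutex m;
    std::condition_variable cv;
    std::vector<unsigned char> frame; // latest grayscale frame
    bool dataReady = false;
    bool aborted = false;

    // Producer side (the timer slot): publish a frame and wake the writer.
    void publish(std::vector<unsigned char> f) {
        {
            std::lock_guard<std::mutex> lock(m);
            frame = std::move(f);
            dataReady = true;
        }
        cv.notify_one();
    }

    // Consumer side (the socket writer): block until a frame arrives
    // or shutdown() is called. Returns false on shutdown.
    bool take(std::vector<unsigned char> &out) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return dataReady || aborted; });
        if (aborted && !dataReady)
            return false;
        out = std::move(frame);
        dataReady = false;
        return true;
    }

    // Replaces the global "aborted" flag check in the while loop.
    void shutdown() {
        {
            std::lock_guard<std::mutex> lock(m);
            aborted = true;
        }
        cv.notify_all();
    }
};
```

The same structure maps directly onto QMutex/QWaitCondition if you prefer to stay inside Qt's threading primitives.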

The frames are written correctly to the socket: I can display them with a server written in pure C++ and pthreads, but I cannot do the same with Qt (the reason for this post).

On the server side (a really basic implementation) I accept a client connection and try to read the bytes sent by the client:


// FramesServer.cpp
FramesServer::FramesServer(QObject *parent)
    : QObject(parent)
{
    m_server = new QTcpServer(this);
    connect(m_server, SIGNAL(newConnection()),
            this, SLOT(newConnection()));

    if(!m_server->listen(QHostAddress::Any, 2222))
    {
        qDebug() << "Server could not start";
        QCoreApplication::quit();
    }
    else
    {
        qDebug() << "Server listening on port 2222";
    }

    m_frame = cv::Mat::zeros(480, 640, CV_8UC1);
    m_frameSize = m_frame.total() * m_frame.elemSize(); // Frame size is always 307200

    m_bytesReceived = 0;
    m_frameBuffer.clear();
}

void FramesServer::newConnection()
{
    m_socket = m_server->nextPendingConnection();

    connect(m_socket, SIGNAL(disconnected()),
            m_socket, SLOT(deleteLater()));

    connect(m_socket, SIGNAL(readyRead()),
            this, SLOT(readFrame()));
}

void FramesServer::readFrame()
{
    // Get the data
    while((m_bytesReceived = m_socket->bytesAvailable()) > 0)
    {
        m_frameBuffer.append(m_socket->readAll());
        if(m_bytesReceived == m_frameSize) // I received a complete frame so I want to display it
        {
            qDebug() << m_frameBuffer.size();
            for (int i = 0; i < img.rows; ++i)
            {
                for (int j = 0; j < img.cols; ++j)
                {
                    (img.row(i)).col(j) = (uchar)m_frameBuffer.at(((img.cols) * i) + j);
                    cv::imshow("DISPLAY", img);
                }
            }
            m_frameBuffer.clear();
        }
    }
}

From the Qt docs I understood that the readyRead signal is emitted every time data is available for reading, so I connected it to the readFrame slot, which reads the data and tries to assemble a complete frame (307200 bytes).
Unfortunately m_frameBuffer does not always end up holding 307200 bytes; most of the time it is a different value, so I cannot construct the cv::Mat image correctly and cannot display it.

I don't know if the logic is implemented correctly; I would appreciate some help with it.

Thank you

Lesiok
15th May 2018, 10:21
FramesServer::readFrame should look something like this:
void FramesServer::readFrame()
{
    // Get the data
    while(m_socket->bytesAvailable() > 0)
    {
        m_frameBuffer.append(m_socket->readAll());
        if(m_frameBuffer.size() >= m_frameSize) // I received a complete frame so I want to display it
        {
            QByteArray workBuffer = m_frameBuffer.left(m_frameSize);
            for (int i = 0; i < img.rows; ++i)
            {
                for (int j = 0; j < img.cols; ++j)
                {
                    (img.row(i)).col(j) = (uchar)workBuffer.at(((img.cols) * i) + j);
                }
            }
            cv::imshow("DISPLAY", img); // display once per frame, not per pixel
            m_frameBuffer.remove(0, m_frameSize);
        }
    }
}
Think about why.
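The "why": TCP is a byte stream with no message boundaries. A single socket.write() of 307200 bytes may be delivered across several readyRead() emissions, and one emission may contain the tail of one frame plus the head of the next, so the receiver must accumulate bytes and slice off full frames itself. A minimal, Qt-free sketch of that bookkeeping (FrameAssembler and its methods are hypothetical names, not any library API):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Reassembles fixed-size frames from an arbitrarily-chunked byte stream,
// mirroring what the corrected readFrame() slot does with m_frameBuffer.
class FrameAssembler {
public:
    explicit FrameAssembler(std::size_t frameSize) : m_frameSize(frameSize) {}

    // Append whatever the socket delivered, then slice off every complete
    // frame; any trailing partial frame stays buffered for the next call.
    std::vector<std::string> feed(const std::string &chunk) {
        m_buffer += chunk;
        std::vector<std::string> frames;
        while (m_buffer.size() >= m_frameSize) {
            frames.push_back(m_buffer.substr(0, m_frameSize));
            m_buffer.erase(0, m_frameSize);
        }
        return frames;
    }

    // Bytes still waiting for the rest of their frame.
    std::size_t pending() const { return m_buffer.size(); }

private:
    std::size_t m_frameSize;
    std::string m_buffer;
};
```

The key invariants: feed() never assumes a chunk aligns with a frame boundary, and leftover bytes survive between calls, exactly as m_frameBuffer does between readyRead() emissions.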

franco.amato
15th May 2018, 16:10
Hi Lesiok,
it worked! I only had to change the QByteArray-to-cv::Mat conversion like this:


QByteArray workBuffer = m_frameBuffer.left(m_frameSize);
cv::Mat frame = cv::Mat(480, 640, CV_8UC1, workBuffer.data());
if(!frame.empty())
{
    // Display it on an OpenCV window
    imshow("DISPLAY", frame);
}


For some reason I don't understand yet, the previous conversion code produced an empty image, even though it had the right size.
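One caveat worth knowing about this conversion (a general cv::Mat property, separate from whatever broke the earlier per-pixel loop): the cv::Mat(rows, cols, type, data) constructor wraps the caller's buffer without copying, so frame is only usable while workBuffer stays alive; frame.clone() makes an owning deep copy if the Mat must outlive the buffer. The borrow-versus-copy distinction in a plain C++ sketch (PixelView, wrap and clone are hypothetical stand-ins for the non-owning Mat header and cv::Mat::clone):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A non-owning view over someone else's pixel buffer, like a cv::Mat
// constructed from an external pointer: no copy is made.
struct PixelView {
    const unsigned char *data; // borrowed, not owned
    std::size_t size;
};

// Borrow: cheap, but only valid while buf lives and is unmodified.
PixelView wrap(const std::vector<unsigned char> &buf) {
    return PixelView{buf.data(), buf.size()};
}

// Deep copy: the analogue of cv::Mat::clone(), safe to keep
// after the source buffer changes or is destroyed.
std::vector<unsigned char> clone(const PixelView &v) {
    return std::vector<unsigned char>(v.data, v.data + v.size);
}
```

In the server above this means workBuffer must not go out of scope while frame is still being displayed, or the displayed Mat should be a clone().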

Thank you