GStreamer stream is not working with OpenCV
I want to use a GStreamer pipeline directly with OpenCV to manage image acquisition from a camera. Since I don't have the camera yet, I've been experimenting with getting video from URIs and local files. I'm using a Jetson AGX Xavier with L4T (Ubuntu 18.04); my OpenCV build includes GStreamer support, and both libraries seem to work fine independently.
The issue I've encountered is that when I pass the string defining the pipeline to the VideoCapture constructor with the CAP_GSTREAMER flag, I receive warnings like these:
[ WARN:0] global /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp (854) open OpenCV | GStreamer warning: Error opening bin: could not link playbin0 to whatever sink I've defined
[ WARN:0] global /home/nvidia/opencv/modules/videoio/src/cap_gstreamer.cpp (597) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
I've tried several options; you can see them in the code below:
bool receiver(const char* context)
{
    VideoCapture cap(context, CAP_GSTREAMER);
    int fail = 0;
    while (!cap.isOpened())
    {
        cout << "VideoCapture not opened" << endl;
        fail++;
        if (fail > 10)
            return false;
    }
    Mat frame;
    bool ok = true;
    while (true)
    {
        cap.read(frame);
        if (frame.empty())      // end of stream or read failure
            break;
        imshow("Receiver", frame);
        if (waitKey(1) == 'r')  // press 'r' to abort
        {
            ok = false;
            break;
        }
    }
    destroyWindow("Receiver");
    return ok;
}
int main(int argc, char *argv[])
{
    const char* context = "gstlaunch v udpsrc port=5000 caps=\"application/xrtp\" ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! ximagesink sync=false"; // Command for the camera that I don't have yet
    const char* test_context = "gstlaunch playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm";
    const char* thermal_context = "playbin uri=file:///home/nvidia/repos/vidtest/thermalVideo.avi ! appsink name=thermalsink";
    const char* local_context = "playbin uri=file:///home/nvidia/repos/flir/Video.avi";
    // gst_init(&argc, &argv);
    // GstElement *pipeline = gst_parse_launch(test_context, NULL);
    bool correct_execution = receiver(thermal_context);
    if (correct_execution) {
        cout << "openCV - gstreamer works!" << endl;
    } else {
        cout << "openCV - gstreamer FAILED" << endl;
    }
}
For the commands I've tested, the warning isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created is persistent. If I don't define an appsink, the first warning changes to open OpenCV | GStreamer warning: cannot find appsink in manual pipeline. From the warnings I understand that the pipeline is incomplete or not created properly, but I don't know why; I've followed the examples I've found online and they don't include any other steps.
Also, when using the GStreamer pipeline directly to visualize the stream, opening a local video seems to work, but it stays frozen on the first frame and never plays. Do you know why that would happen? With playbin and a uri pointing to an internet address everything works well. The code is the following:
#include <gst/gst.h>
#include <unistd.h> // for sleep function
#include <iostream>
using namespace std;

int main(int argc, char *argv[])
{
    GstElement *pipeline;
    GstBus *bus;
    GstMessage *msg;
    const char* context = "gstlaunch v udpsrc port=5000 caps=\"application/xrtp\" ! rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! ximagesink sync=false";
    const char* local_context = "gst-launch-1.0 -v playbin uri=file:///home/nvidia/repos/APPIDE/vidtest/THERMAL/thermalVideo.avi";
    const char* test_context = "gstlaunch playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm";
    // Initialize GStreamer
    gst_init(&argc, &argv);
    // Create the pipeline from the terminal command (context)
    pipeline = gst_parse_launch(local_context, NULL);
    // Start the pipeline
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    // Wait until error or EOS
    bus = gst_element_get_bus(pipeline);
    msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE, (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
    // Free resources
    if (msg != NULL)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
}
Solution 1:
With the GStreamer backend, OpenCV's VideoCapture expects a valid pipeline string running from your source to an appsink (in BGR format for color).
Your pipeline strings are not correct, mainly because they start with the binary name (gstlaunch, i.e. gst-launch-1.0) that you would type in a shell to run them.
You may instead try this pipeline for reading H264-encoded video from RTP/UDP: it decodes with the dedicated NVDEC hardware, copies from NVMM memory into system memory while converting to BGRx, then uses CPU-based videoconvert to produce the BGR format expected by the OpenCV appsink:
const char* context = "udpsrc port=5000 caps=application/x-rtp,media=video,encoding-name=H264 ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1";
Alternatively, with uridecodebin the output may be in NVMM memory if an NVIDIA decoder was selected, or in system memory otherwise, so the first nvvidconv instance below copies into NVMM memory, then the second converts to BGRx in hardware and outputs into system memory:
const char* local_context = "uridecodebin uri=file:///home/nvidia/repos/APPIDE/vidtest/THERMAL/thermalVideo.avi ! nvvidconv ! video/x-raw(memory:NVMM) ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1";
Note for high resolutions that:
- CPU-based videoconvert may be a bottleneck. Enable all cores and boost the clocks.
- OpenCV's imshow may not be that fast, depending on the graphical backend of your OpenCV build (GTK, Qt4, Qt5...). In that case a solution is to use an OpenCV VideoWriter with the GStreamer backend to output to a GStreamer video sink.