Question

My current project:

Send video from a device with a usb camera to a server, on the server do video processing and then send it to another client where it is displayed.

I have gotten gstreamer to work in the terminal:

On the receiving server:

gst-launch-1.0 udpsrc port=5000 ! \
application/x-rtp,media=video,clock-rate=90000,encoding-name=H264 ! \
rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! \
timeoverlay shaded-background=true text="host" deltay=20 ! \
ximagesink async=true sync=false

On the capturing client:

gst-launch-1.0 -v v4l2src ! \
timeoverlay shaded-background=true text="pi" ! \
video/x-raw,height=480,width=640,framerate=30/1 ! \
videoconvert ! omxh264enc ! rtph264pay ! \
udpsink host=136.225.61.68 port=5000

This works very well and the video is being transferred. Now I need to capture the stream in C code on the receiving end, so that I can do face detection etc. with OpenCV and then send the processed stream on to another client. Either this is done with the GStreamer "bad" plugins that have OpenCV support, or by converting the stream into OpenCV Mats and processing them directly. Does anybody know which is easier, and do you have any examples? (I am using GStreamer 1.0.)

Thanks in advance.


Solution 2

I finally found a solution to the first step: I can now use gst_parse_launch to receive the stream in C code.

The code on the server side is now as follows:

#include <gst/gst.h>

int main(int argc, char *argv[]) {
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;
  GError *error = NULL;

  /* Initialize GStreamer */
  gst_init (&argc, &argv);

  /* Build the pipeline; gst_parse_launch accepts the same description
     string as gst-launch-1.0 */
  pipeline = gst_parse_launch ("udpsrc port=5000 ! "
      "application/x-rtp,media=video,clock-rate=90000,encoding-name=H264 ! "
      "rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! "
      "timeoverlay shaded-background=true deltay=20 ! "
      "ximagesink async=true sync=false", &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    g_clear_error (&error);
    return -1;
  }

  /* Start playing */
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Wait until error or EOS */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      (GstMessageType) (GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

  /* Free resources */
  if (msg != NULL)
    gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}

Now the next step is to connect this with OpenCV, or with an OpenCV plugin, so that I can do face detection etc.

Other tips

You can use OpenCV's VideoCapture to receive the stream, and then you can run your image processing on it.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow