GStreamer

Index

General

Installation

  • From packages
    • Mageia
      • urpmi  ... gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-x264
  • Compilation from source
    • Modules and dependencies
      • Dependencies per module and distribution (install with urpmi, yum install or apt-get install respectively):

        gstreamer (/data/doc/gstreamer/head)
        • Mageia: autoconf gettext-devel libtool bison flex gtk-doc yasm
        • CentOS: autoconf gettext-devel libtool bison flex gtk-doc yasm glib2-devel gcc-c++ freetype freetype-devel
        • Ubuntu: autoconf bison flex ...

        gst-plugins-base
        • Mageia: lib64opus-devel libvorbis-devel libogg-devel libtheora-devel libxv-devel libsoup-devel
        • CentOS: opus-devel libvorbis-devel libogg-devel libtheora-devel libxv-devel pango-devel wayland-devel
        • Ubuntu: libopus-dev libvorbis-dev libogg-dev libtheora-dev libxv-dev libpango1.0-dev

        gst-plugins-good
        • Mageia: libvpx-devel ...
        • CentOS: libvpx-devel pulseaudio-libs-devel libsoup-devel
        • Ubuntu: libvpx-dev

        gst-plugins-bad
        • Mageia: librtmp-devel
        • CentOS: librtmp-devel
        • Ubuntu: librtmp-dev

        gst-plugins-ugly
        • Mageia: libx264-devel
        • CentOS: libx264-devel
        • Ubuntu: libx264-dev

        gst-python
        • python-devel pygobject3-devel

        gst-libav
        • (no extra dependencies listed)

        gstreamer-editing-services
        • libxml2-devel

      • Mageia
        • urpmi autoconf gettext-devel libtool bison flex gtk-doc yasm
        • For plugins-base:
          • urpmi lib64opus-devel lib64vorbis-devel lib64ogg-devel lib64theora-devel lib64xv-devel libsoup-devel
      • Raspbian
        • ...
      • CentOS
        • automake >=1.14 (CentOS 7 provides version 1.13)
        • yum install -y autoconf gettext-devel libtool bison flex gtk-doc yasm glib2-devel gcc-c++ freetype freetype-devel
        • plugins-base
          • yum install opus-devel libvorbis-devel libogg-devel libtheora-devel libxv-devel pango-devel
        • plugins-good
          • yum install libvpx-devel
        • plugins-bad
          • yum -y install http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm
          • yum -y install librtmp-devel
        • plugins-ugly
          • yum install libx264-devel
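CentOS 7's automake (1.13) is older than the 1.14 minimum noted above; a small check you can run before building (a sketch: the `version_ge` helper name is illustrative):

```shell
# version_ge A B: succeeds when version A >= version B (uses sort -V).
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

installed="$(automake --version 2>/dev/null | head -n1 | awk '{print $NF}')"
if version_ge "$installed" 1.14; then
    echo "automake $installed is recent enough"
else
    echo "automake >= 1.14 is required (CentOS 7 ships 1.13)"
fi
```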
    • Modules
      • Installed files per module:

        gstreamer
        • /usr/local/lib/: libgstreamer-1.0.so, libgstbase-1.0.so, libgstcheck-1.0.so, libgstcontroller-1.0.so, libgstnet-1.0.so
        • /usr/local/lib/gstreamer-1.0/: libgstcoreelements.so, libgstcoretracers.so
        • /usr/local/lib/girepository-1.0/: Gst*-1.0.typelib
        • /usr/local/share/gir-1.0/: Gst-1.0.gir, GstBase-1.0.gir, GstController-1.0.gir, GstNet-1.0.gir, GstCheck-1.0.gir
        • /usr/local/lib/pkgconfig/: gstreamer-1.0.pc, gstreamer-base-1.0.pc, gstreamer-check-1.0.pc, gstreamer-controller-1.0.pc, gstreamer-net-1.0.pc
        • ...

        gstreamer-editing-services
        • /usr/local/lib/: libges-1.0.so
        • /usr/local/lib/girepository-1.0/: GES-1.0.typelib
        • /usr/local/share/gir-1.0/: GES-1.0.gir
        • /usr/local/lib/pkgconfig/: gst-editing-services-1.0.pc

    • From git
    • From tar files
      • https://gstreamer.freedesktop.org/src/
      • if you want to access GStreamer through PyGObject (applications written in Python):
        • Dependencies
          • Mageia
            • urpmi lib64girepository-devel
          • CentOS
            • sudo yum install gobject-introspection-devel
        • Check config.log
          • HAVE_INTROSPECTION_TRUE=''
            INTROSPECTION_COMPILER='/usr/bin/g-ir-compiler'
            ...

        • Check that these files exist after compilation:
          • /usr/local/lib/girepository-1.0/Gst*.typelib
        • In order to access Gst from your applications, you will need to set an environment variable (to avoid the error ValueError: Namespace Gst not available):
          • export GI_TYPELIB_PATH=/usr/local/lib/girepository-1.0
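A guarded form of the export above, as a sketch (the `set_typelib_path` function name is illustrative):

```shell
# Export GI_TYPELIB_PATH only when the typelib directory actually exists,
# so a missing install does not silently point at an empty path.
set_typelib_path() {
    dir=$1
    if [ -d "$dir" ]; then
        export GI_TYPELIB_PATH="$dir"
        echo "GI_TYPELIB_PATH set to $dir"
    else
        echo "typelib directory not found: $dir" >&2
        return 1
    fi
}

set_typelib_path /usr/local/lib/girepository-1.0 || true
```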
      • gstreamer_install.sh install 1.15.1
      • gstreamer_install.sh
        • #!/bin/bash -e

          EXPECTED_ARGS=2
          if (( $# != $EXPECTED_ARGS ))
          then
              cat <<EOF
          Usage: `basename $0` action version

          Examples:
          - `basename $0` get 1.15.1
          - `basename $0` install 1.15.1
          - `basename $0` uninstall 1.15.1
          EOF
              exit 1
          fi

          # parameters
          action=$1
          version=$2

          mkdir -p gst-${version}
          cd gst-${version}


          # dependencies:
          # automake 1.14
          # gstreamer: autoconf gettext-devel libtool bison flex gtk-doc yasm glib2-devel gcc-c++ freetype freetype-devel
          # plugins-base: libogg-devel libtheora-devel libvorbis-devel opus-devel wayland-devel
          # plugins-good: libvpx-devel pulseaudio-libs-devel libsoup-devel
          # plugins-bad: http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm librtmp-devel
          # plugins-ugly: libx264-devel
          # gst-python: python-devel pygobject3-devel
          # gstreamer-editing-services: libxml2-devel

          # update ldconfig
          sudo sh -c 'echo "/usr/local/lib" > /etc/ld.so.conf.d/local.conf'
          sudo ldconfig

          modules="gstreamer gst-plugins-base gst-plugins-good gst-plugins-bad gst-plugins-ugly gst-libav gst-python gstreamer-editing-services"

          for module in $modules
          do
              src_name="${module}-${version}"
              tar_filename="${src_name}.tar.xz"
              echo "========================================= $src_name ========================================="
              case $action in
                  "get")
                      curl -s -L -O https://gstreamer.freedesktop.org/src/${module}/${tar_filename}
                      ;;
                  "install")
                      tar xJf ${tar_filename}
                      export XDG_DATA_DIRS="/usr/local/share/:/usr/share/"

                      cd ${src_name}
                      ./autogen.sh PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/
                      make
                      sudo make install
                      sudo ldconfig
                      cd ..
                      ;;
                  "uninstall")
                      cd ${src_name}
                      sudo make uninstall
                      cd ..
                      ;;
              esac
          done

          exit 0

      • gstreamer will be installed in:
        • /usr/local/lib/gstreamer-1.0/
    • Problems
      • 1.16.0
      • .../tmp-introspectAx4R5G/.libs/lt-GstMpegts-1.0: error while loading shared libraries: libgstvideo-1.0.so.0: cannot open shared object file: No such file or directory
        • Solution
          • sudo ldconfig
      • (gst-plugin-scanner:21672): GStreamer-WARNING **: 15:56:30.108: Failed to load plugin '/usr/local/lib/gstreamer-1.0/libgstpango.so': /lib64/libcairo.so.2: undefined symbol: FT_Get_Var_Design_Coordinates
        • gst-inspect-1.0 timeoverlay
        • Solution
          • update freetype to 2.8
      • Opus audio encoder not found
        • gst-plugins-bad/ext/opus/opusenc.c was present until version 1.6; then it was moved to plugins-base
        • Solution
          • install the system-wide Opus development libraries, then reconfigure and rebuild plugins-base

Usage

  • Edit
  • Acceleration
  • Raspberry Pi
  • Tools (CLI)
    • How do I use the GStreamer command line interface?
    • Command line tools
    • gst-validate
    • ges-launch
      • GES in Python
      • Help
        • ges-launch-1.0 -h
      • ges-videocrop-effect.sh
      • Syntax
        • project-related options
          • -l, --load=
          • -s --save=
          • -p --sample-path
          • -r --sample-path-recurse
          rendering options
          playback options
          • -v --videosink=
          • -a --audiosink=
          • -m --mute
          helpful options
          • --inspect-action-type=
          • --list-transitions
          generic options
          • --disable-mixing
          • -r --repeat=
          • --set-scenario
          +clip
          • <path|uri>
          • inpoint[i]=
          • duration[d]=
          • start[s]=
          • layer[l]=
          • set-
            • alpha
            • posx
            • posy
            • width
            • height
            • volume
            • mute
          +test-clip
          • smpte
          • ...
          +effect
          • <bin-description>
            • agingtv
            • videocrop
            • ...
          set-

          +title
          • <text>

      • Examples
        • play a clip from second 4.0 to second 6.0:
          • ges-launch-1.0 +clip bbb_720p.mp4 i=4.0 d=2.0
        • play a clip from second 4.0 to second 6.0 and then another clip from the beginning:
          • ges-launch-1.0 +clip bbb_720p.mp4 i=4.0 d=2.0 +clip sintel_720p.mp4
        • play a clip with a logo during 10 seconds on top right:
          • ges-launch-1.0 +clip bbb_720p.mp4 +clip logo.jpeg s=0 d=10 set-alpha 0.8 set-width 200 set-height 100 set-posx 1000 set-posy 20
        • save project to play a clip from second 4.0 to second 6.0:
          • ges-launch-1.0 +clip bbb_720p.mp4 i=4.0 d=2.0 --save bbb.xges
        • play according to project (can also be generated by pitivi):
          • ges-launch-1.0 --load bbb.xges
        • launch pitivi with this project:
          • pitivi bbb.xges
        • render to Ogg - Theora - Vorbis (default encoding profile):
          • ges-launch-1.0 +clip bbb_720p.mp4 i=4.0 d=2.0 -o bbb.ogg
          • ges-launch-1.0 +clip bbb_720p.mp4 i=4.0 d=2.0 -o bbb.ogg -f "video/ogg:video/x-theora:audio/x-vorbis"
        • render to WebM - VP8 - Vorbis:
          • ges-launch-1.0 +clip bbb_720p.mp4 i=4.0 d=2.0 -o bbb.webm -f "video/webm:video/x-vp8:audio/x-vorbis"
          • Problems
            • ERROR from element qtdemux1: Internal data stream error.
              • Solution:
                • urpmi gstreamer1.0-vp8
        • render to MP4 - H.264 - MP3:
          • ges-launch-1.0 +clip bbb_720p.mp4 i=4.0 d=2.0 -o bbb.mp4 -f "video/quicktime,variant=iso:video/x-h264:audio/mpeg,mpegversion=1,layer=3"
        • render to MP4 - H.264 - AAC (Mageia: gstreamer1.0-plugins-bad...tainted):
          • ges-launch-1.0 +clip bbb_720p.mp4 i=4.0 d=2.0 -o bbb.mp4 -f "video/quicktime,variant=iso:video/x-h264:audio/mpeg,mpegversion=4"
          • Problems
            • ERROR from element qtdemux1: Internal data stream error.
              • Solution
                • urpmi gstreamer1.0-x264
        • ...
    • gst-transcoder
      • gst-transcoder-1.0 [OPTION?] <source uri> <destination uri> <encoding target name>[/<encoding profile name>]
      • Examples
        • Create a target file called device/mp4target.gep
        • gst-transcoder-1.0 <input_file> <output_file>.mp4 mp4target/mp4
        • gst-transcoder-1.0 input.mp4 output.mkv matroska
      • Problems
        • WARN: ... no such element factory "uritranscodebin"!
    • gst-discoverer
    • gst-inspect
      • list of all plug-ins
        • gst-inspect-1.0
      • available properties for a specified plugin
        • gst-inspect-1.0 videoconvert
        • ...
      • ...
    • gst-launch (wp)
      • gst-launch-1.0 ... ! ... ! ...
      • gst-launch-1.0 ... ! ... ! ...demux name=mydemux ...mux name=mymux ! ... ! ... mydemux. ! ... ! mymux. mydemux. ! ... ! mymux.
        • input + demux
        • mux + output
        • audio
        • video
      • Options
        • -e: end of stream on shutdown
        • -f, --no-fault : ...
        • --help : ...
        • -q, --quiet : ...
        • -m, --messages :  ...
        • -o FILE, --output=FILE : ...
        • -t, --tags : ...
        • -T, --trace : ...
        • -v, --verbose : ...
        • --gst-debug-level=2
      • Verbose messages (-v, -q): sent to stdout (1)
        • /<element>:<name>/<element>:<name>.<subelement>:<name>: <property>=<value>, <property>=<value> ...
        • gst_format_logs.sh
          • #!/bin/bash
            input_path=$1
            awk -F'\\\\ ' 'BEGIN {OFS="\n";ORS="\n\n"} $1 ~ /^\/GstPipeline/ {$1=$1;print $0}' ${input_path}
            exit 0
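To see what the awk program in gst_format_logs.sh does, feed it a single sample -v line (the sample line below is illustrative):

```shell
# One-field input: the line matches ^/GstPipeline, so it is printed,
# followed by a blank line (ORS="\n\n").
printf '%s\n' '/GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0.GstPad:sink: caps=video/x-raw' |
  awk -F'\\\\ ' 'BEGIN {OFS="\n";ORS="\n\n"} $1 ~ /^\/GstPipeline/ {$1=$1;print $0}'
```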
      • Debug: sent to stderr (2)
        • Debugging tools
        • export GST_DEBUG=1 # default
        • export GST_DEBUG="*:2"
        • export GST_DEBUG=WARN,udpsrc:INFO,videodecoder:DEBUG
        • export GST_DEBUG=3,rtpjitterbuffer:3,rtpbasedepayload:6,videodecoder:4
        • number  name
          1       ERROR
          2       WARNING
          3       FIXME
          4       INFO
          5       DEBUG
          6       LOG
          7       TRACE
          8       (unused)
          9       MEMDUMP
        • Gstreamer pipeline diagram
          • How to generate a Gstreamer pipeline diagram (graph)
          • Dependencies
            • Mageia
              • urpmi graphviz
          • Usage
            • mkdir /tmp/dots
            • export GST_DEBUG_DUMP_DOT_DIR=/tmp/dots
            • gst-launch ...
            • cd /tmp/dots
            • to generate svg:
              • dot -Tsvg ...-gst-launch...PLAYING....dot >pipeline.svg
              • gwenview pipeline.svg
            • to generate png:
              • dot -Tpng ...-gst-launch...PLAYING....dot >pipeline.png
              • gwenview pipeline.png
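The two dot invocations above can be batched; a sketch that converts every generated .dot file to PNG (assumes graphviz is installed; `png_name` is an illustrative helper):

```shell
# Map foo.PLAYING.dot -> foo.PLAYING.png, then convert each diagram.
png_name() {
    echo "${1%.dot}.png"
}

for f in "${GST_DEBUG_DUMP_DOT_DIR:-/tmp/dots}"/*.dot; do
    [ -e "$f" ] || continue            # skip when no .dot files exist
    dot -Tpng "$f" > "$(png_name "$f")"
done
```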
      • Syntax
        • element:
          ELEMENTTYPE [PROPERTY1 ...]
          elements can be put into bins:
          [BINTYPE.] ( [PROPERTY1 ...] PIPELINE-DESCRIPTION )
          property:
          NAME=[(TYPE)]VALUE; in lists and ranges: [(TYPE)]VALUE
          • range: [VALUE,VALUE]
          • list: {VALUE[,VALUE...]}
          type:
          • -i int
          • -f float
          • -4 fourcc
          • -b bool boolean
          • -s str string
          • -fraction
          link:
          [[SRCELEMENT].[PAD1,...]] ! [[SINKELEMENT].[PAD1,...]] [[SRCELEMENT].[PAD1,...]] ! CAPS ! [[SINKELEMENT].[PAD1,...]]
          caps:
          MIMETYPE [, PROPERTY[, PROPERTY ...]]] [; CAPS[; CAPS ...]]
        • typical pipeline stages:
          input → demux → buffer → parse (to get specific packets from the demuxer) → decode → filter → encode (codec parameters, CAPS) → parse (to prepare specific packets for the muxer) → mux → output
          file
          • filesrc location=videofile
          • uri=file:///path/to/test.ts
          devices
          • dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 pids=111:112
          network
          unix
          decodebin name=decoder
          (does not include sdpdemux)
          • corresponding parameters (caps) can be grouped, but must appear after the filter declaration (except for videoconvert: videoconvert ! <caps> ! videoconvert)
          • if a caps value has to be enforced, the corresponding filter element must be present; if the input already has that value, the filter is not needed


          filter / description / caps (mimetype, comma-separated key=value)

          video (video/x-raw)
          • videoscale: width=360 height=288, pixel-aspect-ratio=1/1
          • videorate: framerate=25/1
          • videoconvert: format=BGRA
          • ?: interlace-mode=progressive

          audio (audio/x-raw)
          • audiorate: drops/duplicates/adjusts timestamps on audio samples to make a perfect stream; tolerance=..., ...
          • audioresample: resamples audio; rate=48000
          • audioconvert: converts audio to different formats; format=S16LE, channels=2, layout=interleaved
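Caps strings like those above are often assembled from shell variables before being spliced into a pipeline; a minimal sketch (the values and variable names are illustrative):

```shell
# Assemble raw video/audio caps from parameters.
width=360; height=288; framerate="25/1"; vformat="BGRA"
rate=48000; channels=2; aformat="S16LE"

VIDEOCAPS="video/x-raw,format=${vformat},width=${width},height=${height},framerate=${framerate}"
AUDIOCAPS="audio/x-raw,format=${aformat},layout=interleaved,rate=${rate},channels=${channels}"

echo "$VIDEOCAPS"
echo "$AUDIOCAPS"
# then, e.g.: gst-launch-1.0 videotestsrc ! $VIDEOCAPS ! videoconvert ! autovideosink
```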

          • video
            • video/x-h264,
              profile=baseline
          • video
            • h264parse
          • audio
          • mpegtsmux name=mux
          • flvmux  streamable=true name=mux
          • mp4mux faststart=true
          file
          • filesink location=music.ogg
          devices
          network
          • udpsink host=192.168.0.8 port=5004 sync=false
          • rtmpsink location=rtmp://rtmp_server:1935/app/stream
          unix
          • fdsink
          • shmsink socket-path=...
            shm-size=...
            wait-for-connection=...
          demux
          specific stream (source Element Pads)

          • demux.
          matroskademux
          • demux.audio_%u
          • demux.video_%u
          • demux.subtitle_%u
          qtdemux name=demux
          • demux.video_0
          • demux.audio_0
          • demux.audio_1
          • ...
          sdpdemux name=demux
          • demux.stream_0
          • demux.stream_1
          • ...
          tsdemux program-number=805 name=demux
          flvdemux name=demux
          • audio
          • video
          ...


          • video
            • mpegvideoparse
            • h264parse
          • audio
            • mpegaudioparse
            • aacparse
          • decodebin
          • video
            • omxmpeg2videodec
            • omxh264dec
          • audio
            • avdec_mpeg2video





          • video
            • video/x-raw, framerate=25/1, width=640, height=360,
            • if format is specified, videoconvert must be specified after it
              • format=...,
                ...
          • audio
            • audio/x-raw,
              format=...,
              layout=...,
              rate=...,
              channels=...

      • sdpdemux internally uses:
        • rtpbin
        • udpsrc
        • rtpsession
        • rtpssrcdemux
        • rtpjitterbuffer
        • rtpptdemux

      • Sources and sinks
        • GstBaseSrc
          • do-timestamp
            • must be set to true at the source if we want lipsync at the output (with sync=true at the sink)
        • GstBaseSink
          • async
          • sync
            • the source must be specified with do-timestamp=true
        • fd (file descriptor) (see Snowmix audio)
          • fdsink
            • ...
          • fdsrc
            • ...
          • Example
            • send audio to file descriptor 3 and listen to it. Verbose and warning logs are shown in the terminal
              • export GST_DEBUG=WARN
                AUDIOCAPS="audio/x-raw,format=S16LE,layout=interleaved,rate=44100,channels=2"
                gst-launch-1.0 -v audiotestsrc wave=5 ! volume volume=0.1 ! ${AUDIOCAPS} ! fdsink fd=3 3>&1 1>&2 | gst-launch-1.0 fdsrc ! ${AUDIOCAPS} ! autoaudiosink

        • shm (shared memory) (see Snowmix video)
          • shmsink
            • gst-launch-1.0 -v videotestsrc ! video/x-raw,framerate=25/1,width=640,height=480,format=BGRA ! videoconvert ! shmsink socket-path=/tmp/feed1 shm-size=`echo 640*480*4*22 | bc` wait-for-connection=0
            • gst-launch-1.0 -v videotestsrc is-live=true do-timestamp=true ! video/x-raw,framerate=25/1,width=640,height=480,format=BGRA ! videoconvert ! clockoverlay halignment=right valignment=top shaded-background=true font-desc="Sans, 24" ! shmsink socket-path=/tmp/feed1 shm-size=`echo 640*480*4*22 | bc` wait-for-connection=1 sync=true
            • gst-launch-1.0 filesrc location=sintel_timecode_640x272_44100_stereo.mp4 ! qtdemux name=demux demux. ! decodebin ! videoconvert ! videoscale ! videorate ! video/x-raw,width=320,height=136,format=BGRA ! shmsink socket-path=/tmp/feed1 shm-size=`echo 320*136*4*22 | bc -l` wait-for-connection=1 sync=true
          • shmsrc
            • you must specify: width, height, framerate, format+videoconvert, and they should match the values specified in shmsink
            • gst-launch-1.0 -v shmsrc socket-path=/tmp/feed1 do-timestamp=true is-live=true ! video/x-raw,width=640,height=480,framerate='25/1',format=BGRA ! videoconvert ! autovideosink
            • to play at a framerate different from the input, specify a different framerate and add videorate
          • du -h /dev/shm
          • ls -l /dev/shm
          • netstat -pena --unix
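The `echo 640*480*4*22 | bc` expressions above compute the shared-memory size: width × height × 4 bytes (BGRA) × number of buffered frames. As a small helper (a sketch; `shm_size` is an illustrative name):

```shell
# shm-size for BGRA frames: width * height * 4 bytes * buffer count
# (22 by default, matching the examples above).
shm_size() {
    width=$1; height=$2; nbuffers=${3:-22}
    echo $(( width * height * 4 * nbuffers ))
}

shm_size 640 480    # prints 27033600
```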
        • shm (video) + fd (audio)

          • audio (send / play):
            feed_rate=44100
            feed_channels=2
            AUDIOCAPS="audio/x-raw,format=S16LE,layout=interleaved,rate=${feed_rate},channels=${feed_channels}"


            gst-launch-1.0 \
                audiotestsrc wave=5 ! volume volume=0.1 ! ${AUDIOCAPS} ! fdsink fd=3 3>&1 1>&2 \

            | gst-launch-1.0 fdsrc ! ${AUDIOCAPS} ! autoaudiosink

            video (send / play):
            ratefraction="25/1"
            feed_width=320
            feed_height=180
            video_input_sar="1:1"
            pixel_aspect_ratio=${video_input_sar/:/\/} # replace : -> /
            VIDEOCAPS="video/x-raw,framerate=${ratefraction},width=${feed_width},height=${feed_height},pixel-aspect-ratio=${pixel_aspect_ratio}"
            FORMAT_SHM="BGRA"
            VIDEOCAPS_WITH_FORMAT="${VIDEOCAPS},format=${FORMAT_SHM}"

            shm_socket_path=/tmp/shm_toto
            FORMAT_DISPLAY="I420"
            VIDEOCONVERT_DISPLAY="video/x-raw,format=${FORMAT_DISPLAY}"
            FORMAT_SHM="BGRA"
            VIDEOCONVERT_SHM="video/x-raw,format=${FORMAT_SHM}"


            • gst-launch-1.0 \
                  videotestsrc ! ${VIDEOCAPS_WITH_FORMAT} ! videoconvert ! shmsink socket-path=${shm_socket_path} shm-size=`echo ${feed_width}*${feed_height}*4*22 | bc` wait-for-connection=0 \
            • # send video with tee to monitor
              gst-launch-1.0 \
                  filesrc location=${video_path} ! qtdemux name=demux \
                  demux.video_0 ! queue ! decodebin ! videoscale method=5 ! queue ! videorate ! ${VIDEOCAPS} ! tee name=tv \
                  tv. ! queue ! videoconvert ! ${VIDEOCONVERT_SHM} ! shmsink socket-path=${shm_socket_path} shm-size=`echo ${feed_width}*${feed_height}*4*22 | bc` wait-for-connection=0 \
                  tv. ! queue ! videoconvert ! ${VIDEOCONVERT_DISPLAY} ! autovideosink sync=true \
                  3>&1 1>&2

            gst-launch-1.0 shmsrc socket-path=${shm_socket_path} do-timestamp=true is-live=true ! ${VIDEOCAPS_WITH_FORMAT} ! videoconvert ! autovideosink


            audio + video (send / play):
            • gst-launch-1.0 \
                  videotestsrc ! ${VIDEOCAPS_WITH_FORMAT} ! videoconvert ! shmsink socket-path=${shm_socket_path} shm-size=`echo ${feed_width}*${feed_height}*4*22 | bc` wait-for-connection=0 \
                  audiotestsrc wave=5 ! volume volume=0.1 ! ${AUDIOCAPS} ! fdsink fd=3 3>&1 1>&2 \
            • gst-launch-1.0 \
                  filesrc location=${video_path} ! qtdemux name=demux \
                  demux.video_0 ! queue ! decodebin ! videoscale method=5 ! queue ! videorate ! videoconvert ! ${VIDEOCAPS_WITH_FORMAT} ! \
                  shmsink socket-path=${shm_socket_path} shm-size=`echo ${feed_width}*${feed_height}*4*22 | bc` wait-for-connection=0 \
                  demux.audio_0 ! queue ! decodebin ! audioconvert ! audioresample ! queue ! audiorate ! $AUDIOCAPS ! tee name=ta \
                  ta. ! queue ! fdsink fd=3 \
                  3>&1 1>&2
              \
            • # send audio and video with tee to monitor
              gst-launch-1.0 \
                  filesrc location=${video_path} ! qtdemux name=demux \
                  demux.video_0 ! queue ! decodebin ! videoscale method=5 ! queue ! videorate ! ${VIDEOCAPS} ! tee name=tv \
                  tv. ! queue ! videoconvert ! ${VIDEOCONVERT_SHM} ! shmsink socket-path=${shm_socket_path} shm-size=`echo ${feed_width}*${feed_height}*4*22 | bc` wait-for-connection=0 \
                  tv. ! queue ! videoconvert ! ${VIDEOCONVERT_DISPLAY} ! autovideosink sync=true \
                  demux.audio_0 ! queue ! decodebin ! audioconvert ! audioresample ! queue ! audiorate ! ${AUDIOCAPS} ! tee name=ta \
                  ta. ! queue ! autoaudiosink sync=true \
                  ta. ! queue ! fdsink fd=3 \
                  3>&1 1>&2 \
            | gst-launch-1.0 \
                shmsrc socket-path=${shm_socket_path} do-timestamp=true is-live=true ! ${VIDEOCAPS_WITH_FORMAT} ! videoconvert ! autovideosink \
                fdsrc ! ${AUDIOCAPS} ! autoaudiosink

            • created files (/tmp/shm_toto, /dev/shm/shmpipe*) are left behind when:
              • the pipeline fails to start
              • it is stopped by closing the output window (the X button)
            • the created files are removed properly when:
              • it is stopped with CTRL-C
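Given the leftover files described above, a cleanup sketch for unclean shutdowns (the `cleanup_shm` helper and the paths are illustrative):

```shell
# Remove the shmsink socket and any shared-memory segments left by a crash.
cleanup_shm() {
    socket_path=$1
    rm -f "$socket_path"
    # shmsink segments show up as /dev/shm/shmpipe*; -f ignores non-matches
    rm -f /dev/shm/shmpipe* 2>/dev/null || true
}

cleanup_shm /tmp/shm_toto
```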

      • Demux
        • sdpdemux
          • Play from SDP file
          • Includes
          • Parameters
            • latency (ms)
              • INFO         rtpjitterbuffer gstrtpjitterbuffer.c:3942:do_deadline_timeout:<rtpjitterbuffer0> got deadline timeout
      • Codecs
      • Bins
      • Play
        • general: using playbin
          • gst-launch-1.0 -v playbin uri=...
        • gst-launch-1.0 -v playbin uri=...
          gst-launch-1.0 uridecodebin uri=... name=decoder
          playsink name=sink decoder.src_0 ! sink.video_sink decoder.src_1 ! sink.audio_sink

        • from testsrc
          • videotestsrc
            • gst-launch-1.0 -v videotestsrc ! video/x-raw,framerate=25/1,width=1280,height=720 ! autovideosink
            • gst-launch-1.0 -v videotestsrc pattern=snow ! video/x-raw,framerate=12/1,width=1280,height=720 ! autovideosink
            • gst-launch-1.0 -v videotestsrc ! video/x-raw,framerate=12/1,width=1280,height=720,format=BGRA ! videoconvert ! autovideosink
            • clock overlay
              • gst-launch-1.0 -v videotestsrc is-live=true ! clockoverlay halignment=right valignment=top shaded-background=true font-desc="Sans, 24" ! autovideosink
            • time overlay
              • gst-launch-1.0 -v videotestsrc is-live=true ! timecodestamper ! timeoverlay shaded-background=true 'time-mode=time-code'  font-desc="Sans, 24" ! autovideosink
              • gst-launch-1.0 -v videotestsrc is-live=true ! video/x-raw, framerate=25/1, width=640, height=360 ! timecodestamper ! timeoverlay halignment=right valignment=bottom text="Stream time:" shaded-background=true font-desc="Sans, 24" ! autovideosink
            • clock + time overlay
              • gst-launch-1.0 -v videotestsrc is-live=true ! video/x-raw, framerate=25/1, width=640, height=360 ! timecodestamper ! timeoverlay halignment=left valignment=top shaded-background=true font-desc="Sans, 24" ! clockoverlay halignment=right valignment=top shaded-background=true font-desc="Sans, 24" ! autovideosink
          • audiotestsrc
            • gst-launch-1.0 -v audiotestsrc ! autoaudiosink
            • white noise (wave=5), stereo (channels=2)
              • gst-launch-1.0 -v audiotestsrc is-live=true wave=5 ! 'audio/x-raw,format=S16LE,layout=interleaved,rate=48000,channels=2' ! autoaudiosink
          • test video + audio
            • gst-launch-1.0 -v videotestsrc ! video/x-raw,framerate=25/1,width=1280,height=720 ! autovideosink audiotestsrc ! autoaudiosink
        • from DVB device
          • only video from a DVB device (mpegvideoparse is needed; otherwise the teletext stream may be picked and an error is shown):
            • gst-launch-1.0 -v dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0  ! tsdemux program-number=805 ! queue ! mpegvideoparse ! decodebin ! autovideosink
          • only audio:
            • gst-launch-1.0 -v dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0  ! tsdemux program-number=805 ! queue ! mpegaudioparse ! decodebin ! autoaudiosink
            • gst-launch-1.0 dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=806 name=demux demux. ! queue ! mpegaudioparse ! decodebin ! omxanalogaudiosink
          • audio and video from program 805 in DVB input:
            • gst-launch-1.0 -v dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=805 name="demux" \
              demux. ! queue ! mpegaudioparse ! decodebin ! autoaudiosink \
              demux. ! queue ! mpegvideoparse ! decodebin ! autovideosink
        • from file
          • gst-launch-1.0 -v playbin uri=file:/absolute/path/to/your_video_file
          • gst-launch-1.0 \
                filesrc location=${video_path} ! decodebin name=dec \
                dec. ! queue ! autovideosink \
                dec. ! queue ! autoaudiosink

          • MP4
            • gst-launch-1.0 -v playbin uri=file:/absolute/path/to/toto.mp4
            • audio and video from an MP4 file (queue is needed when playing audio and video)
              • gst-launch-1.0 filesrc location=sintel-1024-stereo.mp4 ! qtdemux name=demux \
                demux. ! queue ! decodebin ! autovideosink \
                demux. ! queue ! decodebin ! autoaudiosink
                 
            • rescale an anamorphic video
              • gst-launch-1.0 filesrc location=toto_720x576_anamorphic.mp4 ! qtdemux name=demux demux. ! queue ! decodebin ! videoscale ! video/x-raw,width=176,height=140,pixel-aspect-ratio=64/45 ! autovideosink
            • only audio from an MP4 file
              • gst-launch-1.0 filesrc location=sintel-1024-stereo.mp4 ! qtdemux name=demux \
                demux.audio_0 ! decodebin ! autoaudiosink
          • TS
            • program in a TS file (first program found?):
              • gst-launch-1.0 -v playbin uri=file:/tmp/toto.ts
            • only video from TS file (program_number=802)
              • gst-launch-1.0 -v filesrc location=/disc/videos/tvc/tvc_794_20140821_1709.ts ! tsdemux program-number=802 ! mpegvideoparse ! decodebin ! autovideosink
          • OGG
            • audio and video from an OGG file (queue is needed when playing audio and video)
              • gst-launch-1.0 filesrc location=sintel_trailer-720p.ogv ! oggdemux name=demux \
                demux. ! queue ! decodebin ! autovideosink \
                demux. ! queue ! decodebin ! autoaudiosink
          • WebM / Matroska
            • gst-launch-1.0 -v playbin uri=file:/absolute/path/to/toto.webm
            • audio and video from a WebM file:
              • gst-launch-1.0 -v \
                filesrc location=/path/to/toto.webm ! matroskademux name=demux \
                demux.video_0 ! queue ! decodebin ! autovideosink sync=true \
                demux.audio_0 ! queue ! decodebin ! autoaudiosink sync=true

              • gst-launch-1.0 -v \
                filesrc location=/path/to/toto.webm ! matroskademux name=demux \
                demux. ! queue ! vp8dec ! autovideosink sync=true \
                demux. ! queue ! opusdec ! autoaudiosink sync=true
          • SDP
            • See also: play from RTP
            • sdpdemux
            • UDP buffer
              • value is taken from kernel parameter net.core.rmem_default
              • GST_DEBUG=3,udpsrc:4 gst-launch filesrc location=toto.sdp ! sdpdemux name=demux ...
                • udpsrc gstudpsrc.c:1428:gst_udpsrc_open:<udpsrc0> have udp buffer of 212992 bytes
            • audio and video from SDP file (RTP):
              • gst-launch-1.0 filesrc location=toto.sdp do-timestamp=true ! sdpdemux latency=1000 debug=true name=bin \
                bin. ! "application/x-rtp, media=(string)video" ! decodebin ! autovideosink sync=true \
                bin. ! "application/x-rtp, media=(string)audio" ! decodebin ! autoaudiosink sync=true

              • # RTP + RTCP, using sdpdemux
                gst-launch-1.0 \
                      filesrc location=$sdp_path do-timestamp=true ! sdpdemux latency=${sdpdemux_latency_ms} name=bin \
                      bin. ! "application/x-rtp, media=(string)video" ! queue ! decodebin ! videoconvert ! videoscale ! videorate ! autovideosink sync=true \
                      bin. ! "application/x-rtp, media=(string)audio" ! queue ! decodebin ! audioconvert ! audioresample ! audiorate ! autoaudiosink sync=true

              • H.264 + AAC
                • gst-launch-1.0 filesrc location=toto.sdp do-timestamp=true ! sdpdemux name=demux \
                  demux. ! queue ! rtph264depay ! decodebin ! autovideosink sync=true \
                  demux. ! queue ! rtpmp4gdepay ! decodebin ! autoaudiosink sync=true

            • only video from an SDP file (RTP). Make sure that video is the first stream specified in the SDP file:
              • gst-launch-1.0 filesrc location=toto.sdp ! sdpdemux name=demux \
                demux.stream_0 ! queue ! decodebin ! autovideosink
            • only audio from an SDP file (RTP). Make sure that audio is the second stream specified in the SDP file:
              • gst-launch-1.0 filesrc location=toto.sdp ! sdpdemux name=demux \
                demux.stream_1 ! queue ! decodebin ! autoaudiosink
            • Problemes / Problems
              • Pèrdua de paquets / Packet loss
              • Manca de fluïdesa / Lack of smoothness
                • videodecoder gstvideodecoder.c:2775:gst_video_decoder_prepare_finish_frame:<avdec_h264-0> decreasing timestamp (0:00:00.008558259 < 0:00:00.058900688)
                  • Solució / Solution
                    • Increase latency parameter (default: 200 ms) for sdpdemux:
                    • gst-launch-1.0 filesrc location=toto.sdp ! sdpdemux latency=400 name=demux ...
                • audiobasesink gstaudiobasesink.c:1787:gst_audio_base_sink_get_alignment:<autoaudiosink0-actual-sink-alsa> Unexpected discontinuity in audio timestamps of +0:00:00.131360544, resyncing
                  • Solució / Solution
                    • ...
              • “delayed linking failed”
              • lipsync
        • from network
          • RTP
            • See also: Play from SDP file
            • Problemes / Problems
              • udpsrc
                • videodecoder gstvideodecoder.c:2775:gst_video_decoder_prepare_finish_frame:<avdec_h264-0> decreasing timestamp (0:00:45.183879818 < 0:00:45.188331700)
            • gst-launch-1.0 udpsrc address=127.0.0.1 port=5004 ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, packetization-mode=(string)1, profile-level-id=(string)64001f, payload=(int)96" ! queue ! \
              rtph264depay ! decodebin ! autovideosink
            • gst-launch-1.0 -v \
                  udpsrc address=${address} port=${video_port} do-timestamp=true ! queue ! "application/x-rtp, media=(string)video, payload=(int)96, clock-rate=(int)90000, encoding-name=(string)H264, packetization-mode=(string)1, sprop-parameter-sets=(string)\"Z2QAHqzZQKAv+XARAAADAAEAAAMAPA8WLZY\=\,aOvssiw\=\", profile-level-id=(string)64001E" ! rtph264depay ! decodebin ! autovideosink sync=true \
                  udpsrc address=${address} port=${audio_port} do-timestamp=true ! queue ! "application/x-rtp, media=(string)audio, payload=(int)97, clock-rate=(int)44100, encoding-name=(string)MPEG4-GENERIC, encoding-params=(string)2, profile-level-id=(string)1, mode=(string)AAC-hbr, sizelength=(string)13, indexlength=(string)3, indexdeltalength=(string)3, config=(string)121056E500" ! rtpmp4gdepay ! decodebin ! autoaudiosink sync=true
              • caps can be obtained e.g. by executing: gst-launch-1.0 -v ... sdpdemux ...
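              • One way to harvest those caps: run the sdpdemux pipeline once with -v, save the output to a log, and grep the application/x-rtp caps out of it. A plain-Python sketch (the sample log line is made up, but follows the -v output format):

```python
import re

def extract_rtp_caps(log_text):
    """Return the unique application/x-rtp caps strings found in
    `gst-launch-1.0 -v` output (lines of the form '<pad>: caps = ...')."""
    caps = []
    for match in re.finditer(r'caps = (application/x-rtp[^\r\n]*)', log_text):
        if match.group(1) not in caps:
            caps.append(match.group(1))
    return caps

sample = ('/GstSDPDemux:demux/GstUDPSrc:udpsrc0.GstPad:src: caps = '
          'application/x-rtp, media=(string)video, clock-rate=(int)90000, '
          'encoding-name=(string)H264')
print(extract_rtp_caps(sample))
```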
            • common variables:
              • # ffmpeg -re -i easylife.mp4 -c:v copy -an -f rtp -cname toto rtp://234.1.2.3:5004 -vn -c:audio copy -f rtp -cname toto rtp://234.1.2.3:5006 -sdp_file /mnt/nfs/sdp/toto.sdp

                address=234.1.2.3

                video_rtp_port=5004
                video_rtcp_port=$(( video_rtp_port + 1 ))
                VIDEOCAPS="application/x-rtp, media=(string)video, payload=(int)96, clock-rate=(int)90000, encoding-name=(string)H264, packetization-mode=(string)1, sprop-parameter-sets=(string)\"Z2QAHqzZQKAv+XARAAADAAEAAAMAPA8WLZY\=\,aOvssiw\=\", profile-level-id=(string)64001E"

                audio_rtp_port=$(( video_rtp_port + 2 ))
                audio_rtcp_port=$(( video_rtp_port + 3 ))
                AUDIOCAPS="application/x-rtp, media=(string)audio, payload=(int)97, clock-rate=(int)44100, encoding-name=(string)MPEG4-GENERIC, encoding-params=(string)2, profile-level-id=(string)1, mode=(string)AAC-hbr, sizelength=(string)13, indexlength=(string)3, indexdeltalength=(string)3, config=(string)121056E500"
            • RTP
              • # RTP
                gst-launch-1.0 \
                    udpsrc address=${address} port=${video_rtp_port} do-timestamp=true ! "$VIDEOCAPS" ! \
                    rtph264depay ! queue ! decodebin ! autovideosink sync=true \
                    udpsrc address=${address} port=${audio_rtp_port} do-timestamp=true ! "$AUDIOCAPS" ! \
                    rtpmp4gdepay ! queue ! decodebin ! autoaudiosink sync=true

            • RTP (using rtpbin)
              • # RTP, using rtpbin
                gst-launch-1.0 \
                    rtpbin name=bin \
                    udpsrc address=${address} port=${video_rtp_port} do-timestamp=true ! "$VIDEOCAPS" ! bin.recv_rtp_sink_0 \
                    bin. ! rtph264depay ! queue ! decodebin ! autovideosink sync=true \
                    udpsrc address=${address} port=${audio_rtp_port} do-timestamp=true ! "$AUDIOCAPS" ! bin.recv_rtp_sink_1 \
                    bin. ! rtpmp4gdepay ! queue ! decodebin ! autoaudiosink sync=true

            • RTP with RTCP (using rtpbin)
              • # RTP + RTCP, using rtpbin
                gst-launch-1.0 \
                    rtpbin name=bin \
                    udpsrc address=${address} port=${video_rtp_port} do-timestamp=true ! "$VIDEOCAPS" ! bin.recv_rtp_sink_0 \
                    bin. ! "application/x-rtp, media=(string)video" ! queue ! decodebin ! autovideosink sync=true \
                    udpsrc address=${address} port=${video_rtcp_port} ! "application/x-rtcp" ! bin.recv_rtcp_sink_0 \
                    udpsrc address=${address} port=${audio_rtp_port} do-timestamp=true ! "$AUDIOCAPS" ! bin.recv_rtp_sink_1 \
                    bin. ! "application/x-rtp, media=(string)audio" ! queue ! decodebin ! autoaudiosink sync=true \
                    udpsrc address=${address} port=${audio_rtcp_port} ! "application/x-rtcp" ! bin.recv_rtcp_sink_1

              • # RTP + RTCP, using rtpbin
                gst-launch-1.0 \
                    rtpbin name=bin \
                    udpsrc address=${address} port=${video_rtp_port} do-timestamp=true ! "$VIDEOCAPS" ! bin.recv_rtp_sink_0 \
                    bin. ! rtph264depay ! queue ! decodebin ! autovideosink sync=true \
                    udpsrc address=${address} port=${video_rtcp_port} ! "application/x-rtcp" ! bin.recv_rtcp_sink_0 \
                    udpsrc address=${address} port=${audio_rtp_port} do-timestamp=true ! "$AUDIOCAPS" ! bin.recv_rtp_sink_1 \
                    bin. ! rtpmp4gdepay ! queue ! decodebin ! autoaudiosink sync=true \
                    udpsrc address=${address} port=${audio_rtcp_port} ! "application/x-rtcp" ! bin.recv_rtcp_sink_1
            • Problemes / Problems
              • lipsync
                • source: specify do-timestamp=true
                • sinks: specify sync=true
                • sdpdemux: latency must be big enough. If the stream has B-frames, try latency=1000
                • check that RTP flow contains RTCP packets with "Source Description" (SDES) information
          • HTTP
            • check that souphttpsrc is present
              • gst-inspect-1.0 | grep souphttpsrc
            • if not present, compile it
              • Dependencies
                • CentOS
                  • sudo yum install libsoup-devel
                • Mageia
                  • urpmi libsoup-devel
              • gst-plugins-good
                • ./configure
                • make
                • sudo make install
            • gst-launch-1.0 playbin uri=http://download.blender.org/peach/bigbuckbunny_movies/BigBuckBunny_320x180.mp4
          • RTMP
            • gst-launch-1.0 -v playbin uri=rtmp://nginx-server/myapp/mystream
            • gst-launch-1.0 -v \
                                rtmpsrc location=${source} do-timestamp=true ! queue2 ! decodebin name=mydecoder \
                                mydecoder. ! autovideosink sync=true \
                                mydecoder. ! autoaudiosink sync=true

            • (not working?) source=rtmp://nginx-server/myapp/mystream
              gst-launch-1.0 \
                  rtmpsrc location=${source} do-timestamp=true ! flvdemux name=demux \
                  demux.video ! queue ! decodebin ! autovideosink sync=true \
                  demux.audio ! queue ! decodebin ! autoaudiosink sync=true

      • Mux to
        • test to TS
          • gst-launch-1.0 -v videotestsrc ! video/x-raw,framerate=24/1,width=1280,height=720 ! videoconvert ! x264enc ! video/x-h264,profile=high ! mpegtsmux ! filesink location=toto.ts
        • test to MP4
          • NOTE: stream-format=(string)byte-stream is not supported by MP4
          • gst-launch-1.0 -v videotestsrc ! video/x-raw,framerate=24/1,width=1280,height=720 ! videoconvert ! x264enc ! video/x-h264,profile=high ! mp4mux ! filesink location=toto.mp4
            • Problem:
              • moov atom not found
                • Solution
                  • (?) mp4mux faststart=true
      • Transmux
        • rtmp -> rtmp
          • gst-launch-1.0 rtmpsrc location=rtmp://server.org/my_app/first do-timestamp=true ! queue2 ! flvdemux name=demux \
            flvmux name=mux \
            demux.video ! queue ! mux.video \
            demux.audio ! queue ! mux.audio \
            mux.src ! queue ! rtmpsink location=rtmp://server.org/my_app/second

          • Note: if using nginx-rtmp-module as destination, check nginx.conf configuration
        • only video (mp4 -> ts):
          • gst-launch-1.0 filesrc location=sintel-1024-stereo.mp4 ! qtdemux name=demux \
            mpegtsmux name=mux ! filesink location=toto.ts \
            demux. ! queue ! h264parse ! mux.
          • gst-launch-1.0 filesrc location=sintel-1024-stereo.mp4 ! qtdemux name=demux \
            mpegtsmux name=mux ! filesink location=toto.ts \
            demux. ! video/x-h264 ! queue ! h264parse ! mux.
        • video and audio (mp4 -> ts)
          • gst-launch-1.0 filesrc location=sintel-1024-stereo.mp4 ! qtdemux name=demux \
            mpegtsmux name=mux ! filesink location=toto2015.ts \
            demux. ! queue ! h264parse ! mux. \
            demux. ! queue ! aacparse ! mux.
        • video and audio (mp4 -> flv)
          • gst-launch-1.0 -v filesrc location=sintel-1024-stereo.mp4 ! qtdemux name=demux \
            flvmux streamable=true name=mux ! filesink location=toto.flv \
            demux. ! queue ! h264parse ! mux. \
            demux. ! queue ! aacparse ! mux.
        • video and audio (flv -> mp4)
          • ...
        • video (H.264) (sdp -> ts)
          • gst-launch-1.0 -v filesrc location=toto.sdp ! sdpdemux name=demux \
            mpegtsmux name=mux ! filesink location=toto.ts \
            demux. ! queue ! rtph264depay ! mux.
        • video (H.264) and audio (AAC) (sdp -> ts)
          • gst-launch-1.0 -v filesrc location=toto.sdp ! sdpdemux name=demux \
            mpegtsmux name=mux ! filesink location=toto.ts \
            demux. ! queue ! rtph264depay ! mux. \
            demux. ! queue ! rtpmp4gdepay ! mux.
        • video (sdp->mp4) (not working)
          • gst-launch-1.0 filesrc location=/tmp/bbb.sdp ! sdpdemux name=demux \
            mp4mux name=mux ! filesink location=/tmp/toto.mp4 \
            demux. ! rtph264depay ! h264parse ! mux.

            • problem
              • ffplay toto.mp4
                [mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f128c0008c0] moov atom not found
                toto.mp4: Invalid data found when processing input
              • Solution?
                • mp4mux faststart=true
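            • mp4mux writes the moov index at EOS, so a pipeline that is killed (or a live SDP source that never ends cleanly) leaves a file without it. A quick way to check a file, as a plain-Python sketch walking the top-level ISO BMFF boxes (helper names are hypothetical):

```python
import struct

def top_level_boxes(data):
    """Yield (type, size) for each top-level ISO BMFF box in `data`."""
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack(">I4s", data[offset:offset + 8])
        if size == 1:    # 64-bit largesize follows the 8-byte header
            size = struct.unpack(">Q", data[offset + 8:offset + 16])[0]
        elif size == 0:  # box extends to the end of the file
            size = len(data) - offset
        yield box_type.decode("latin-1"), size
        offset += size

def has_moov(data):
    return any(box_type == "moov" for box_type, _ in top_level_boxes(data))

# A truncated capture typically contains ftyp and mdat but no moov:
ftyp = struct.pack(">I4s", 16, b"ftyp") + b"isom" + b"\x00" * 4
mdat = struct.pack(">I4s", 8, b"mdat")
moov = struct.pack(">I4s", 8, b"moov")
print(has_moov(ftyp + mdat))         # truncated file
print(has_moov(ftyp + mdat + moov))  # complete file
```

Note that faststart=true makes mp4mux buffer the file and move moov to the front, but the mux still needs a clean EOS; gst-launch-1.0 -e sends one on Ctrl-C.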
        • video and audio (sdp -> mp4) (not working)
          • gst-launch-1.0 -v filesrc location=toto.sdp ! sdpdemux name=demux \
            mp4mux name=mux ! filesink location=toto.mp4 \
            demux. ! queue ! h264parse ! mux. \
            demux. ! queue ! aacparse ! mux.
      • Transcode
        • only video, to file:
          • gst-launch-1.0 -v dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=805 name="demux" \
            demux. ! queue ! mpegvideoparse ! decodebin ! videoconvert ! x264enc ! video/x-h264,stream-format=byte-stream,profile=high ! h264parse ! \
            mpegtsmux ! filesink location=/tmp/toto.ts
        • and resize, stream:
          • gst-launch-1.0 -v filesrc location=tvc_20150604.ts ! tsdemux program-number=806 ! \
            mpegvideoparse ! decodebin ! videoscale ! video/x-raw, width=320, height=320 ! videoconvert ! omxh264enc ! h264parse ! \
            mpegtsmux ! udpsink host=192.168.0.8 port=5004 sync=false
          • gst-launch-1.0 -v dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=806 ! \
            mpegvideoparse ! decodebin ! videoscale ! video/x-raw, width=320, height=320 ! videoconvert ! omxh264enc ! h264parse ! \
            mpegtsmux ! udpsink host=192.168.0.8 port=5004 sync=false

          • gst-launch-1.0 -v dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=805 name="demux" \
            demux. ! queue ! mpegvideoparse ! decodebin ! videoscale ! 'video/x-raw, width=360, height=288' ! videoconvert ! x264enc ! video/x-h264,stream-format=byte-stream,profile=main ! h264parse ! \
            mpegtsmux ! udpsink host=192.168.0.8 port=5004 sync=false

      • tee
        • two windows from MP4 file:
          • gst-launch-1.0 \
                filesrc location=${video_path} ! qtdemux name=demux \
                demux. ! queue ! decodebin ! tee name=tv \
                tv. ! queue ! autovideosink sync=true \
                tv. ! queue ! autovideosink sync=true \
                demux. ! queue ! decodebin ! tee name=ta \
                ta. ! queue ! autoaudiosink sync=true \
                ta. ! queue ! autoaudiosink sync=true

        • two windows from SDP file:
          • gst-launch-1.0 \
                filesrc location=${video_path} do-timestamp=true ! sdpdemux name=bin \
                bin. ! "application/x-rtp, media=(string)audio" ! queue ! decodebin ! tee name=ta \
                ta. ! queue ! autoaudiosink sync=true \
                ta. ! queue ! autoaudiosink sync=true \
                bin. ! "application/x-rtp, media=(string)video" ! queue ! decodebin ! tee name=tv \
                tv. ! queue ! autovideosink sync=true \
                tv. ! queue ! autovideosink sync=true

      • Stream
        • Introduction to network streaming using GStreamer
        • TS over UDP
          • UDP unicast stream only audio:
            • gst-launch-1.0 dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=806 name=demux \
              demux. ! queue ! mpegaudioparse ! decodebin ! audioconvert ! lamemp3enc ! \
              mpegtsmux ! udpsink host=192.168.0.8 port=5004 sync=false
            • gst-launch-1.0 dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=806 name=demux \
              demux. ! queue ! mpegaudioparse ! decodebin ! audioconvert ! lamemp3enc ! mux. \
              mpegtsmux name=mux ! udpsink host=192.168.0.8 port=5004 sync=false
            • gst-launch-1.0 dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=806 name=demux \
              mpegtsmux name=mux ! udpsink host=192.168.0.8 port=5004 sync=false \
              demux. ! queue ! mpegaudioparse ! decodebin ! audioconvert ! lamemp3enc ! mux.

          • Encode to H.264, mux to TS, UDP stream:
            • gst-launch-1.0 -e videotestsrc ! video/x-raw, framerate=25/1, width=640, height=360 ! x264enc ! \
              mpegtsmux ! udpsink host=192.168.0.8 port=5004 sync=false

            • gst-launch-1.0 -v -e videotestsrc ! video/x-raw, framerate=25/1, width=640, height=360 ! x264enc bitrate=512 ! video/x-h264,profile=high ! h264parse ! \
              mpegtsmux ! udpsink host=192.168.0.8 port=5004 sync=false
            • gst-launch-1.0 -e mpegtsmux name="muxer" ! udpsink host=192.168.0.8 port=5004 sync=false \
              videotestsrc ! video/x-raw, framerate=25/1, width=640, height=360 ! x264enc bitrate=512 ! video/x-h264,profile=high ! h264parse ! muxer.

          • Mux video and audio, UDP stream:
            • gst-launch-1.0 -e mpegtsmux name="muxer" ! udpsink host=192.168.0.8 port=5004 sync=false \
              videotestsrc ! video/x-raw, framerate=25/1, width=640, height=360 ! x264enc bitrate=512 ! video/x-h264,profile=high ! h264parse ! muxer. \
              audiotestsrc wave=5 ! audioconvert ! lamemp3enc ! muxer.
            • gst-launch-1.0 dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=806 name=demux \
              mpegtsmux name=mux ! udpsink host=192.168.0.8 port=5004 sync=false \
              demux. ! queue ! mpegaudioparse ! decodebin ! audioconvert ! lamemp3enc ! mux. \
              demux. ! queue ! mpegvideoparse ! decodebin ! videoscale ! video/x-raw, width=320, height=320 ! videoconvert ! omxh264enc inline-header=true periodicty-idr=50 ! h264parse ! mux.
            • Notes:
              • default for omxh264enc is inline-header=true
              • periodicty-idr must be specified for VLC to be able to play the stream (the property name is spelled this way in gst-omx)
              • these parameters are only available in recent versions of GStreamer; you may need to compile version 1.2
        • to RTP
          • RTP and RTSP support
          • gstreamer/gst-plugins-good/gst/rtp/README
          • Streaming H.264 via RTP
          • Play from SDP file
          • SDP generation
            • RTP components
              • GStreamer SDP library
                • Structure
                • gst-plugins-base / gst-libs / gst / sdp
                • webrtcbidirectional.c
                • gstwebrtcbin.h
                • mapping from caps / SDPMedia fields to SDP lines
                  • code:
                    • ret, media = GstSdp.SDPMedia.new()
                      caps = pad.get_current_caps()
                      # ... all the media.set_...() / media.add_...() calls below ...
                      ret = GstSdp.SDPMedia.set_media_from_caps(caps, media)
                      media.as_text()
                  • m=<caps.media> <media.port> <media.proto> <caps.payload>
                    • e.g. m=video 5004 RTP/AVP 96
                  • i=<media.information>
                    • e.g. i=my info
                  • c=<media.connection.nettype> <media.connection.addrtype> <media.connection.address>
                    • set with media.add_connection("IN", "IP4", "1.2.3.4", 16, 1)
                    • e.g. c=IN IP4 1.2.3.4
                  • a=rtpmap:<caps.payload> <caps.encoding-name>/<caps.clock-rate>
                    • e.g. a=rtpmap:96 H264/90000
                  • a=fmtp:<caps.payload> ... (taken from caps)
                    • e.g. a=fmtp:96 ...
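                • As an illustration of the mapping above in plain Python (no GstSdp): given the caps fields, the m= and a=rtpmap lines are assembled like this. The helper is a hypothetical stand-in for what GstSdp.SDPMedia.set_media_from_caps() does for real:

```python
def sdp_media_from_caps(caps, port, proto="RTP/AVP"):
    """Build the m=/a= lines of one SDP media section from RTP caps
    fields, mirroring the mapping table above (hypothetical stand-in
    for GstSdp.SDPMedia.set_media_from_caps)."""
    payload = caps["payload"]
    lines = ["m={0} {1} {2} {3}".format(caps["media"], port, proto, payload),
             "a=rtpmap:{0} {1}/{2}".format(payload, caps["encoding-name"],
                                           caps["clock-rate"])]
    return "\r\n".join(lines) + "\r\n"

caps = {"media": "video", "payload": 96,
        "encoding-name": "H264", "clock-rate": 90000}
print(sdp_media_from_caps(caps, 5004))
```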
                • mp4rtp.py
                  • class Streamer(object):
                        def build_sdp(self, rtpbin, address, port, sdp_path, session_name=None):
                            """
                            rtpbin: Gst.RtpBin element
                            address: destination address
                            port: initial destination port
                            sdp_path: created file with SDP
                            session_name: (optional) session name (s=...)
                            """
                            ret, sdp_message = GstSdp.SDPMessage.new()
                            sdp_message.set_version('0')
                            ttl = 64
                            number_addresses = 1
                            sdp_message.set_connection("IN", "IP4", address, ttl, number_addresses)
                            if session_name:
                                sdp_message.set_session_name(session_name)
                           
                            pads = rtpbin.iterate_pads()
                            while True:
                                ret, pad = pads.next()
                                if ret==Gst.IteratorResult.OK:
                                    # only source pads
                                    if pad.direction != Gst.PadDirection.SRC:
                                        continue
                                   
                                    # only pads with name send_rtp_src...
                                    pad_name = pad.get_name()
                                    if not pad_name.startswith('send_rtp_src'):
                                        continue
                               
                                    print("pad: {0:s}".format(pad.name))
                                    caps = pad.get_current_caps()
                                    print("  {0:s}".format(caps.to_string()))

                                    ret, media = GstSdp.SDPMedia.new()
                                    if ret != GstSdp.SDPResult.OK:
                                        print("Error")
                                        return
                                   
                                    media.set_port_info(port, 1)
                                    port = port + 2
                                    media.proto = "RTP/AVP"
                           
                                    ret = GstSdp.SDPMedia.set_media_from_caps(caps, media)
                                    if ret != GstSdp.SDPResult.OK:
                                        print("Error")
                                   
                                    sdp_message.add_media(media)
                                                   
                                elif ret==Gst.IteratorResult.DONE:
                                    break
                                elif ret==Gst.IteratorResult.ERROR:
                                    break
                     
                            print(sdp_message.as_text())
                            with open(sdp_path, 'w') as f:
                                f.write(sdp_message.as_text())

                        def on_message(self, bus, msg, user_data):
                            t = msg.type
                            ...
                            elif t == Gst.MessageType.STATE_CHANGED:
                                old, new, pending = msg.parse_state_changed()
                                if new == Gst.State.PAUSED:
                                    if msg.src.name=="bin":
                                        print("RtpBin")
                                        rtpbin = msg.src
                                        self.build_sdp(rtpbin, self.dst_address, self.initial_port, "output.sdp", None)
                    ...
            • webrtcbin
            • Generating a SDP file from a streaming pipeline
            • caps to SDP (README)
          • correspondence between a gstreamer command (and its caps) and the SDP file
            • command:
              • gst-launch-1.0 -v videotestsrc ! videoconvert ! x264enc ! rtph264pay config-interval=10 pt=96 ! udpsink host=234.1.2.3 port=5004
            • caps (given by -v):
              • application/x-rtp,
                media=(string)video,
                clock-rate=(int)90000,
                encoding-name=(string)H264,
                packetization-mode=(string)1,
                profile-level-id=(string)f4000d,
                sprop-parameter-sets=(string)"Z/QADZGbKCg/YC1BgEFQAAADABAAAAMDyPFCmWA\=\,aOvsRIRA",
                payload=(int)96,
                ssrc=(uint)3934427744,
                timestamp-offset=(uint)2187273080,
                seqnum-offset=(uint)1602,
                a-framerate=(string)30
              • application/x-rtp,
                media=(string)audio,
                ...
            • sdp file:
              • v=0
                m=<media> <port> RTP/AVP <payload>
                c=IN IP4 <host>
                a=rtpmap:<payload> <encoding-name>/<clock-rate>
                a=fmtp:96 packetization-mode=<packetization-mode>; sprop-parameter-sets=<sprop-parameter-sets>; profile-level-id=<profile-level-id>

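          • Going the other way, the caps string printed by -v can be split into the fields the SDP template needs. A naive plain-Python sketch (it splits on commas, so it breaks on caps whose values contain escaped commas, e.g. sprop-parameter-sets):

```python
def parse_caps(caps_str):
    """Split a simple caps string like
    'application/x-rtp, media=(string)video, clock-rate=(int)90000'
    into (name, {field: value}), dropping the (type) annotations.
    Naive: values containing escaped commas are not handled."""
    parts = [p.strip() for p in caps_str.split(",")]
    fields = {}
    for part in parts[1:]:
        key, _, value = part.partition("=")
        if value.startswith("("):   # strip the (type) prefix
            value = value.split(")", 1)[1]
        fields[key.strip()] = value.strip('"')
    return parts[0], fields

name, fields = parse_caps(
    "application/x-rtp, media=(string)video, clock-rate=(int)90000, "
    "encoding-name=(string)H264, payload=(int)96")
print(name, fields["encoding-name"], fields["clock-rate"])
```

From these fields the a=rtpmap:<payload> <encoding-name>/<clock-rate> line follows directly.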

          • send and receive, from file
            • common code
              • send:
                • input_path=$1

                  sdp_path=/tmp/toto.sdp
                  dst_address=234.1.2.3

                  video_rtp_port=5004
                  video_rtcp_port=$(( video_rtp_port + 1 ))
                  video_media_subtype="H264"
                  rtp_video_payload_type=96

                  audio_rtp_port=$(( video_rtp_port + 2 ))
                  audio_rtcp_port=$(( video_rtp_port + 3 ))
                  audio_media_subtype="aac"
                  rtp_audio_payload_type=$(( rtp_video_payload_type + 1 ))
                  audio_rate=48000
                  audio_channels=2
              • receive:
                • sdp_path=$1

                  address=234.1.2.3

                  video_rtp_port=5004
                  video_rtcp_port=$(( video_rtp_port + 1 ))
                  rtp_video_payload_type=96
                  VIDEOCAPS="application/x-rtp, media=(string)video, payload=(int)96, clock-rate=(int)90000, encoding-name=(string)H264, packetization-mode=(string)1, sprop-parameter-sets=(string)\"Z2QAHqzZQKAv+XARAAADAAEAAAMAPA8WLZY\=\,aOvssiw\=\", profile-level-id=(string)64001E"

                  audio_rtp_port=$(( video_rtp_port + 2 ))
                  audio_rtcp_port=$(( video_rtp_port + 3 ))
                  rtp_audio_payload_type=$(( rtp_video_payload_type + 1 ))
                  AUDIOCAPS="application/x-rtp, media=(string)audio, payload=(int)97, clock-rate=(int)44100, encoding-name=(string)MPEG4-GENERIC, encoding-params=(string)2, profile-level-id=(string)1, mode=(string)AAC-hbr, sizelength=(string)13, indexlength=(string)3, indexdeltalength=(string)3, config=(string)121056E500"
            • SDP (written by the sender, read by the receiver)
              • function create_sdp {
                    local sdp_path=$1

                    # sdp
                    cat >$sdp_path <<EOF
                v=0
                c=IN IP4 ${dst_address}
                m=video ${video_rtp_port} RTP/AVP ${rtp_video_payload_type}
                a=rtpmap:${rtp_video_payload_type} ${video_media_subtype}/90000
                m=audio ${audio_rtp_port} RTP/AVP ${rtp_audio_payload_type}
                a=rtpmap:${rtp_audio_payload_type} ${audio_media_subtype}/${audio_rate}/${audio_channels}
                EOF
                    if (( audio_channels == 2 )) && [[ ${audio_media_subtype} == "opus" ]]
                    then
                        echo "a=fmtp:${rtp_audio_payload_type} sprop-stereo=1" >>${sdp_path}
                    fi
                }
            • RTP
              • send:
                • gst-launch-1.0 \
                      filesrc location=${input_path} ! qtdemux name=demux \
                      demux.video_0 ! queue ! rtph264pay pt=$rtp_video_payload_type ! \
                      udpsink host=${dst_address} port=${video_rtp_port} sync=true \
                      demux.audio_0 ! queue ! rtpmp4gpay pt=$rtp_audio_payload_type ! \
                      udpsink host=${dst_address} port=${audio_rtp_port} sync=true
              • receive:
                • gst-launch-1.0 \
                      udpsrc address=${address} port=${video_rtp_port} do-timestamp=true ! "$VIDEOCAPS" ! \
                      "application/x-rtp, media=(string)video" ! queue ! decodebin ! autovideosink sync=true \
                      udpsrc address=${address} port=${audio_rtp_port} do-timestamp=true ! "$AUDIOCAPS" ! \
                      "application/x-rtp, media=(string)audio" ! queue ! decodebin ! autoaudiosink sync=true
            • RTP using rtpbin
              • send:
                • gst-launch-1.0 \
                      rtpbin name=bin \
                      filesrc location=${input_path} ! qtdemux name=demux \
                      demux.video_0 ! queue ! rtph264pay pt=$rtp_video_payload_type ! bin.send_rtp_sink_0 \
                      bin.send_rtp_src_0 ! udpsink host=${dst_address} port=${video_rtp_port} sync=true \
                      demux.audio_0 ! queue ! rtpmp4gpay pt=$rtp_audio_payload_type ! bin.send_rtp_sink_1 \
                      bin.send_rtp_src_1 ! udpsink host=${dst_address} port=${audio_rtp_port} sync=true
              • receive:
                • gst-launch-1.0 \
                      rtpbin name=bin \
                      udpsrc address=${address} port=${video_rtp_port} do-timestamp=true ! "$VIDEOCAPS" ! bin.recv_rtp_sink_0 \
                      bin. ! "application/x-rtp, media=(string)video" ! queue ! decodebin ! autovideosink sync=true \
                      udpsrc address=${address} port=${audio_rtp_port} do-timestamp=true ! "$AUDIOCAPS" ! bin.recv_rtp_sink_1 \
                      bin. ! "application/x-rtp, media=(string)audio" ! queue ! decodebin ! autoaudiosink sync=true
            • RTP + RTCP using rtpbin
              • send:
                • gst-launch-1.0 \
                      rtpbin name=bin \
                      filesrc location=${input_path} ! qtdemux name=demux \
                      demux.video_0 ! queue ! rtph264pay pt=$rtp_video_payload_type ! bin.send_rtp_sink_0 \
                      bin.send_rtp_src_0 ! udpsink host=${dst_address} port=${video_rtp_port} sync=true \
                      bin.send_rtcp_src_0 ! udpsink host=${dst_address} port=${video_rtcp_port} sync=false async=false \
                      demux.audio_0 ! queue ! rtpmp4gpay pt=$rtp_audio_payload_type ! bin.send_rtp_sink_1 \
                      bin.send_rtp_src_1 ! udpsink host=${dst_address} port=${audio_rtp_port} sync=true \
                      bin.send_rtcp_src_1 ! udpsink host=${dst_address} port=${audio_rtcp_port} sync=false async=false
              • receive:
                • gst-launch-1.0 \
                      rtpbin name=bin \
                      udpsrc address=${address} port=${video_rtp_port} do-timestamp=true ! "$VIDEOCAPS" ! bin.recv_rtp_sink_0 \
                      bin. ! "application/x-rtp, media=(string)video" ! queue ! decodebin ! autovideosink sync=true \
                      udpsrc address=${address} port=${video_rtcp_port} ! "application/x-rtcp" ! bin.recv_rtcp_sink_0 \
                      udpsrc address=${address} port=${audio_rtp_port} do-timestamp=true ! "$AUDIOCAPS" ! bin.recv_rtp_sink_1 \
                      bin. ! "application/x-rtp, media=(string)audio" ! queue ! decodebin ! autoaudiosink sync=true \
                      udpsrc address=${address} port=${audio_rtcp_port} ! "application/x-rtcp" ! bin.recv_rtcp_sink_1
            • RTP + RTCP using sdpdemux
              • receive:
                • gst-launch-1.0 -v \
                      filesrc location=$sdp_path do-timestamp=true ! sdpdemux latency=${sdpdemux_latency_ms} name=bin \
                      bin. ! "application/x-rtp, media=(string)video" ! queue ! decodebin ! videoconvert ! videoscale ! queue ! videorate ! autovideosink sync=true \
                      bin. ! "application/x-rtp, media=(string)audio" ! queue ! decodebin ! audioconvert ! audioresample ! queue ! audiorate ! autoaudiosink sync=true

          • encode webcam, UDP stream:
            • (GStreamer 0.10 syntax) gst-launch v4l2src ! video/x-raw-yuv,width=128,height=96,format='(fourcc)'UYVY ! ffmpegcolorspace ! ffenc_h263 ! video/x-h263 ! rtph263ppay pt=96 ! udpsink host=192.168.1.1 port=5000 sync=false
          • test VP8 / Opus to RTP (no RTCP) (WebRTC and Janus)
            • gst-launch-1.0 \
              audiotestsrc is-live=true wave=5 ! audioresample ! audioconvert ! audio/x-raw,channels=2,rate=16000 ! opusenc bitrate=20000 ! rtpopuspay pt=97 ! udpsink host=127.0.0.1 port=5002 \
              videotestsrc ! video/x-raw,width=320,height=240,framerate=15/1 ! videoscale ! videorate ! videoconvert ! timeoverlay ! vp8enc ! rtpvp8pay pt=96 ! udpsink host=127.0.0.1 port=5004
            • sdp
              • v=0
                c=IN IP4 127.0.0.1
                m=video 5004 RTP/AVP 96
                a=rtpmap:96 VP8/90000
                m=audio 5002 RTP/AVP 97
                a=rtpmap:97 opus/48000/2
                a=fmtp:97 sprop-stereo=1

          • test VP8 / Opus to RTP (with RTCP, using rtpbin):
            • sdp_path=/tmp/toto.sdp
              dst_address=225.4.3.2

              video_rtp_port=5100
              video_rtcp_port=$(( video_rtp_port + 1 ))
              video_media_subtype="VP8"
              rtp_video_payload_type=96

              audio_rtp_port=$(( video_rtp_port + 2 ))
              audio_rtcp_port=$(( video_rtp_port + 3 ))
              audio_media_subtype="opus"
              rtp_audio_payload_type=$(( rtp_video_payload_type + 1 ))

              rate=48000
              channels=2

              # sdp
              cat >$sdp_path <<EOF
              v=0
              c=IN IP4 $dst_address
              m=video $video_rtp_port RTP/AVP $rtp_video_payload_type
              a=rtpmap:$rtp_video_payload_type ${video_media_subtype}/90000
              m=audio $audio_rtp_port RTP/AVP $rtp_audio_payload_type
              a=rtpmap:$rtp_audio_payload_type ${audio_media_subtype}/${rate}/${channels}
              EOF
              if (( channels == 2 )) && [[ ${audio_media_subtype} == "opus" ]]
              then
                  echo "a=fmtp:${rtp_audio_payload_type} sprop-stereo=1" >>${sdp_path}
              fi

              gst-launch-1.0 -v \
              rtpbin name=bin \
              videotestsrc ! video/x-raw,width=320,height=240,framerate=25/1 ! videoscale ! videorate ! videoconvert ! timeoverlay ! vp8enc ! rtpvp8pay pt=$rtp_video_payload_type ! bin.send_rtp_sink_0 \
              bin.send_rtp_src_0 ! udpsink host=${dst_address} port=${video_rtp_port} sync=true \
              bin.send_rtcp_src_0 ! udpsink host=${dst_address} port=${video_rtcp_port} sync=false async=false \
              audiotestsrc is-live=true wave=5 ! audioconvert  ! audioresample ! audio/x-raw,channels=${channels},rate=${rate} ! opusenc bitrate=64000 ! rtpopuspay pt=$rtp_audio_payload_type ! bin.send_rtp_sink_1 \
              bin.send_rtp_src_1 ! udpsink host=${dst_address} port=${audio_rtp_port} sync=true \
              bin.send_rtcp_src_1 ! udpsink host=${dst_address} port=${audio_rtcp_port} sync=false async=false


          • test H.264 to RTP (no RTCP)
            • gst-launch-1.0 -v videotestsrc ! videoconvert ! x264enc ! rtph264pay config-interval=10 pt=96 ! udpsink host=234.1.2.3 port=5004
            • player:
              • toto.sdp
                • v=0
                  m=video 5004 RTP/AVP 96
                  c=IN IP4 234.1.2.3
                  a=rtpmap:96 H264/90000
                  a=fmtp:96 packetization-mode=1
              • ffplay -protocol_whitelist file,rtp,udp -i toto.sdp (newer FFmpeg versions require the protocol whitelist for local SDP files)
          • from file (VP8, Opus) to RTP (no RTCP)
            • sdp_path=/tmp/toto.sdp
              dst_address=225.4.3.2

              video_rtp_port=5100
              video_rtcp_port=$(( video_rtp_port + 1 ))
              video_media_subtype="VP8"
              rtp_video_payload_type=96

              audio_rtp_port=$(( video_rtp_port + 2 ))
              audio_rtcp_port=$(( video_rtp_port + 3 ))
              audio_media_subtype="opus"
              rtp_audio_payload_type=$(( rtp_video_payload_type + 1 ))

              rate=48000
              channels=2

              # sdp
              cat >$sdp_path <<EOF
              v=0
              c=IN IP4 $dst_address
              m=video $video_rtp_port RTP/AVP $rtp_video_payload_type
              a=rtpmap:$rtp_video_payload_type ${video_media_subtype}/90000
              m=audio $audio_rtp_port RTP/AVP $rtp_audio_payload_type
              a=rtpmap:$rtp_audio_payload_type ${audio_media_subtype}/${rate}/${channels}
              EOF
              if (( channels == 2 )) && [[ ${audio_media_subtype} == "opus" ]]
              then
                  echo "a=fmtp:${rtp_audio_payload_type} sprop-stereo=1" >>${sdp_path}
              fi

              gst-launch-1.0 -v \
              filesrc location=/path/to/toto.webm ! matroskademux name=demux \
              demux.video_0 ! queue ! rtpvp8pay pt=$rtp_video_payload_type ! udpsink host=${dst_address} port=${video_rtp_port} sync=true \
              demux.audio_0 ! queue ! rtpopuspay pt=$rtp_audio_payload_type ! udpsink host=${dst_address} port=${audio_rtp_port} sync=true

          • from file (H.264, AAC) to RTP
            • gst-launch-1.0 \
                  filesrc location=${input_path} ! qtdemux name=demux \
                  demux.video_0 ! queue ! rtph264pay pt=$rtp_video_payload_type ! udpsink host=${dst_address} port=${video_rtp_port} sync=true \
                  demux.audio_0 ! queue ! rtpmp4gpay pt=$rtp_audio_payload_type ! udpsink host=${dst_address} port=${audio_rtp_port} sync=true
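To play these H.264/AAC RTP streams with ffplay, an SDP file is needed. A sketch, assuming the dst_address, ports, and payload types from the variable block above; the AAC line follows RFC 3640 (AAC-hbr), the 44100/2 rate/channels are assumptions, and the config= value must be copied from the caps that `gst-launch -v` prints for rtpmp4gpay:

```
v=0
c=IN IP4 225.4.3.2
m=video 5100 RTP/AVP 96
a=rtpmap:96 H264/90000
m=audio 5102 RTP/AVP 97
a=rtpmap:97 mpeg4-generic/44100/2
a=fmtp:97 streamtype=5; profile-level-id=1; mode=AAC-hbr; sizelength=13; indexlength=3; indexdeltalength=3; config=...
```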

          • from file (H.264, AAC) to RTP, using rtpbin
            • ...
          • from file (H.264, AAC) to RTP + RTCP, using rtpbin
            • ...
        • RTMP to:
          • options from librtmp
          • nginx-rtmp
            • Video and audio:
              • from test to H.264, AAC
                • gst-launch-1.0 -v flvmux name=mux ! rtmpsink location=rtmp://nginx-server/myapp/mystream \
                  videotestsrc ! video/x-raw, width=360, height=288 ! x264enc ! video/x-h264,profile=baseline,width=360,height=288 ! h264parse ! mux. \
                  audiotestsrc wave=5 ! audioconvert !  avenc_aac compliance=experimental ! aacparse ! mux.

              • from test to H.264 (omx), MP3
                • gst-launch-1.0 -v flvmux name=mux ! rtmpsink location=rtmp://nginx-server/myapp/mystream \
                  videotestsrc ! video/x-raw, width=360, height=288 ! omxh264enc ! video/x-h264,profile=baseline,width=360,height=288 ! h264parse ! mux. \
                  audiotestsrc wave=5 ! audioconvert ! lamemp3enc ! mpegaudioparse ! mux.
              • from DVB
                • gst-launch-1.0 -v dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=806 name=demux \
                  flvmux name=mux ! rtmpsink location=rtmp://nginx-server/myapp/mystream \
                  demux. ! queue ! mpegvideoparse ! decodebin ! videoscale ! video/x-raw, width=320, height=320 ! videoconvert ! omxh264enc inline-header=true periodicty-idr=50 ! h264parse ! mux. \
                  demux. ! queue ! mpegaudioparse ! decodebin ! audioconvert ! avenc_aac compliance=experimental ! aacparse ! mux.
              • video, audio with PID 0x7c:
                • gst-launch-1.0 -vvv dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=806 name=demux \
                  flvmux name=mux ! rtmpsink location=rtmp://192.168.0.8/myapp/mystream \
                  demux. ! queue ! mpegvideoparse ! decodebin ! videoscale ! video/x-raw, width=320, height=320 ! videoconvert ! omxh264enc inline-header=true periodicty-idr=50 ! h264parse ! mux. \
                  demux.audio_007c ! queue ! mpegaudioparse ! decodebin ! audioconvert ! avenc_aac compliance=experimental ! aacparse ! mux.

              • video, test audio:
                • gst-launch-1.0 -vvv dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=806 name=demux \
                  flvmux name=mux ! rtmpsink location=rtmp://nginx-server/myapp/mystream \
                  demux. ! queue ! mpegvideoparse ! decodebin ! videoscale ! video/x-raw, width=320, height=320 ! videoconvert ! omxh264enc inline-header=true periodicty-idr=50 ! h264parse ! mux. \
                  audiotestsrc wave=5 ! audioconvert ! lamemp3enc ! mpegaudioparse ! mux.

              • video, audio forced to 44100 Hz (FLV does not support MP3 at 48000 Hz, although it does support AAC at 48000 Hz; queue max-size-time must be increased from the default 1000000000 ns [1 s], e.g. to 4 s as below)
                • gst-launch-1.0 -vvv dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=806 name=demux \
                  flvmux name=mux ! rtmpsink location=rtmp://nginx-server/myapp/mystream \
                  demux. ! queue max-size-time=4000000000 ! mpegvideoparse ! decodebin ! videoscale ! video/x-raw, width=320, height=320 ! videoconvert ! omxh264enc inline-header=true periodicty-idr=50 ! h264parse ! mux. \
                  demux.audio_007c ! queue max-size-time=4000000000 ! mpegaudioparse ! decodebin ! audioconvert ! audioresample ! audio/x-raw,rate=44100 ! lamemp3enc ! mpegaudioparse ! mux.
              • video, audio AAC at 48000Hz:
                • gst-launch-1.0 -vvv dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=806 name=demux \
                  flvmux name=mux ! rtmpsink location=rtmp://nginx-server/myapp/mystream \
                  demux. ! queue ! mpegvideoparse ! decodebin ! videoscale ! video/x-raw, width=32, height=32 ! videoconvert ! omxh264enc inline-header=true periodicty-idr=50 ! h264parse ! mux. \
                  demux.audio_007c ! queue ! mpegaudioparse ! decodebin ! audioconvert ! audioresample ! audio/x-raw,rate=48000 ! avenc_aac compliance=experimental ! aacparse ! mux.
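
A minimal nginx-rtmp server configuration matching the rtmp://nginx-server/myapp/mystream URLs used above might look like the following sketch (the application and stream names are only the ones used in these examples):

```
rtmp {
    server {
        listen 1935;
        application myapp {
            live on;
        }
    }
}
```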

            • Problemes / Problems
          • Wowza
            • Live Streaming from RaspberryPi using GStreamer - Help please?
              • Incoming security / Flash Version String:
                • Wirecast/|FME/|FMLE/|Wowza GoCoder*|Gstreamer/|Gstreamer/*|Gstreamer*
            • How to secure publishing from an RTMP encoder that does not support authentication (ModuleSecureURLParams)
            • Streaming to a Flash Media Server using the rtmpsink element
              • profile=baseline
            • working / not working
              • working
                • omxh264enc ! video/x-h264,profile=high
                • omxh264enc ! video/x-h264,profile=baseline
                • x264enc ! video/x-h264,profile=baseline
              • not working
                • x264enc ! video/x-h264,profile=high
            • only video:
              • gst-launch-1.0 -v -e flvmux name=mux ! rtmpsink  location=rtmp://wowza_server/application/stream \
                videotestsrc ! video/x-raw, framerate=25/1, width=640, height=360 ! x264enc bitrate=512 ! video/x-h264,profile=baseline ! h264parse ! mux.
              • gst-launch-1.0 -v dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=805 name=demux \
                flvmux name=mux ! rtmpsink location=rtmp://wowza_server/application/stream \
                demux. ! queue ! mpegvideoparse ! decodebin ! videoscale ! video/x-raw, width=360, height=288 ! videoconvert ! omxh264enc inline-header=true periodicty-idr=1 ! video/x-h264,profile=baseline ! h264parse ! mux.
              • gst-launch-1.0 -v dvbsrc modulation="QAM 64" trans-mode=8k bandwidth=8 frequency=658000000 code-rate-lp=AUTO code-rate-hp=2/3 guard=4 hierarchy=0 ! tsdemux program-number=805 name=demux \
                flvmux name=mux ! rtmpsink location=rtmp://wowza-server/application/stream \
                demux. ! queue ! mpegvideoparse ! decodebin ! videoscale ! video/x-raw, width=360, height=288 ! videoconvert ! omxh264enc inline-header=true periodicty-idr=1 ! video/x-h264,profile=high ! h264parse ! mux.
              • gst-launch-1.0 -v flvmux name=mux ! rtmpsink location=rtmp://wowza-server/application/stream \
                videotestsrc ! video/x-raw, width=360, height=288 ! omxh264enc ! video/x-h264,profile=high ! h264parse ! mux.
            • video and audio:
              • gst-launch-1.0 -v -e flvmux name=mux ! rtmpsink  location=rtmp://wowza-server/application/stream videotestsrc ! video/x-raw, framerate=24/1, width=1024, height=436 ! x264enc bitrate=800 ! video/x-h264,profile=baseline ! h264parse ! mux. audiotestsrc wave=5 ! audioconvert ! lamemp3enc ! mpegaudioparse ! mux.
              • gst-launch-1.0 -v flvmux name=mux ! rtmpsink location=rtmp://wowza-server/application/stream videotestsrc ! video/x-raw, width=360, height=288 ! x264enc        ! video/x-h264,profile=baseline ! h264parse ! mux. audiotestsrc wave=5 ! audioconvert ! lamemp3enc ! mpegaudioparse ! mux.
              • gst-launch-1.0 -v flvmux name=mux ! rtmpsink location=rtmp://wowza-server/application/stream videotestsrc ! video/x-raw, width=360, height=288 ! omxh264enc ! video/x-h264,profile=baseline ! h264parse ! mux. audiotestsrc wave=5 ! audioconvert ! lamemp3enc ! mpegaudioparse ! mux.
              • gst-launch-1.0 -v flvmux name=mux ! rtmpsink location=rtmp://wowza-server/application/stream videotestsrc ! video/x-raw, width=360, height=288 ! omxh264enc ! video/x-h264,profile=baseline ! h264parse ! mux.
          • Flash Media Server
      • gst-launch filesrc location=videofile ! decodebin name=decoder \
        decoder. ! queue ! audioconvert ! audioresample ! osssink \
        decoder. ! ffmpegcolorspace ! xvimagesink
    • capture timestamped frames (BeagleCam)
      • gst-launch v4l2src num-buffers=1 ! video/x-raw-yuv,width=640,height=480,framerate=30/1 ! ffmpegcolorspace ! jpegenc ! filesink location=$(date +"%s").jpg
    • gstreamer dvb streaming
  • Graphical editor
    • gst-editor (only for gstreamer 0.8)

Desenvolupament / Development

  • Documentació / Documentation
    module submodule
    GitLab C Reference Manual
    PyGObject API reference
    symbol mapping (C - Python)
    CLI tool
    gtk


    GLib
    Symbol Mapping

    gstreamer gstreamer gstreamer
    GStreamer 1.0 Core Reference Manual

    Gst
    Symbol Mapping
    gst-launch-1.0
    GstBase

    GstCheck

    GstController

    GstNet


    gstreamer-libs
    gstreamer / libs
    GStreamer 1.0 Library Reference Manual



    gstreamer-plugins gstreamer / plugins
    GStreamer Core Plugins 1.0 Plugins Reference Manual



    gst-plugins-base gst-plugins-base-libs gst-plugins-base /  gst-libs GStreamer Base Plugins 1.0 Library Reference Manual
    GstAllocators


    GstApp


    GstAudio


    GstPbutils

    GstRtp

    GstRtsp

    GstSdp Symbol Mapping
    GstTag

    GstVideo

    GstGL

    gst-plugins-base-plugins gst-plugins-base / gst
    GStreamer Base Plugins 1.0 Plugins Reference Manual



    gst-plugins-good gst-plugins-good gst-plugins-good GStreamer Good Plugins 1.0 Plugins Reference Manual



    gst-plugins-ugly gst-plugins-ugly gst-plugins-ugly GStreamer Ugly Plugins 1.0 Plugins Reference Manual



    gst-plugins-bad gst-plugins-bad-libs gst-plugins-bad / gst-libs
    GStreamer Bad Plugins 1.0 Library Reference Manual GstInsertBin

    GstMpegts

    GstPlayer

    GstWebRTC

    gst-plugins-bad-plugins gst-plugins-bad GStreamer Bad Plugins 1.0 Plugins Reference Manual



    gst-python





    gst-rtsp-server

    GStreamer RTSP Server Reference Manual GstRtspServer

    gst-validate


    GstValidate Reference Manual


    gst-validate-transcoding-1.0
    gst-editing-services
    gstreamer / gst-editing-services
    pitivi / gst-editing-services (mirror)
    GStreamer Editing Services 1.12.2 Reference Manual GES
    ges-launch-1.0
    pitivi (developer) gst-transcoding
    pitivi / gst-transcoding

    GstTranscoder

    gst-transcoder-1.0
  • Llenguatges / Languages
  • Resum / Summary
    task
    options
    steps
    C
    Python




    methods

    headers


    #include <gst/gst.h>
    #!/usr/bin/env python3

    import sys
    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst, GObject, GLib


    Initialize Gstreamer


    /* init */
    gst_init (&argc, &argv);

    class Player(object):
        def __init__(self):
            # init GStreamer
            Gst.init(None)

    main GLib loop
    (optional, but needed when using add_signal_watch)


    GMainLoop *main_loop;
    main_loop = g_main_loop_new (NULL, FALSE);

    self.loop = GLib.MainLoop.new(None, False)
    Arguments

    Usage /* check args */
    if (argc != 2) {
      g_print ("Usage: %s <filename>\n", argv[0]);
      return -1;
    }

    # check input arguments
    if len(sys.argv) != 2:
        print("Usage: {0:s} <filename>".format(sys.argv[0]))
        sys.exit(1)


    Arguments

    if Gst.Uri.is_valid(sys.argv[1]):
        uri = sys.argv[1]
    else:
        uri = Gst.filename_to_uri(sys.argv[1])
    print("uri: {0:s}".format(uri))
    Build the pipeline

    option 1:
    Build the pipeline by parsing (basic tutorial 1)

    /* create a new pipeline with all the elements */
    GstElement *pipeline;
    pipeline = gst_parse_launch ("playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm")

    # create a new pipeline with all the elements
    pipeline = Gst.parse_launch("playbin uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm")

    option 2:
    Build the pipeline from elements
    (basic tutorial 2)

    Create the elements /* create the elements */
    GstElement *pipeline, *source, *sink;

    source = gst_element_factory_make ("videotestsrc", "source");
    sink = gst_element_factory_make ("autovideosink", "sink");
    # create the elements
    source = Gst.ElementFactory.make("videotestsrc", "source")
    sink = Gst.ElementFactory.make("autovideosink", "sink")
    (or create the factory and then the element)
    (Creating a GstElement, basic tutorial 6)
    /* create factory and element */
    factory = gst_element_factory_find ("fakesrc");
    element = gst_element_factory_create (factory, "source");

    # create factory and element
    source_factory = Gst.ElementFactory.find("videotestsrc")
    sink_factory = Gst.ElementFactory.find("autovideosink")
    source = source_factory.create("source")
    sink = sink_factory.create("sink")

    Create the empty pipeline /* create an empty pipeline */
    pipeline = gst_pipeline_new ("test-pipeline");
    # create an empty pipeline
    pipeline = Gst.Pipeline.new("test-pipeline")
    Build the pipeline: add and link elements
    /* add elements to the pipeline */
    gst_bin_add_many (GST_BIN (pipeline), source, sink, NULL);
    if (gst_element_link (source, sink) != TRUE) {
      g_printerr ("Elements could not be linked.\n");
      gst_object_unref (pipeline);
      return -1;
    }

    # add elements to the pipeline
    pipeline.add(source)
    pipeline.add(sink)
    if not source.link(sink):
        print("ERROR: Could not link source to sink")
        sys.exit(1)

    Modify properties /* set property of an element */
    g_object_set (source, "pattern", 0, NULL);

    # set property of an element
    source.set_property("pattern", 0)


    Connect element signal to a callback

    def on_have_type(self, element, probability, caps, user_data):




    # connect signal to a callback
    typefind.connect("have-type", on_have_type, None)
    option 3:
    Dynamically connect the elements in the pipeline
    (basic tutorial 3)
    Callback pad_added_handler(...)
    def on_pad_added(self, ...):
    Connect signal to callback g_signal_connect (data.source, "pad-added", G_CALLBACK (pad_added_handler), &data);
    # connect signal to a callback
    source.connect("pad-added", self.on_pad_added)
    Start playing


    /* set the pipeline to playing state */
    gst_element_set_state (pipeline, GST_STATE_PLAYING);

    # set the pipeline to playing state
    pipeline.set_state(Gst.State.PLAYING)

    Main loop
    (two ways to use a bus)

    Get bus associated to pipeline /* get the bus from the pipeline */
    GstBus *bus;
    bus = gst_element_get_bus (pipeline);
    # get the bus from the pipeline
    bus = pipeline.get_bus()
    option 1:
    Wait until error or EOS
    (basic tutorial 1)

    /* wait until error or end of stream */
    GstMessage *msg;
    msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

    # wait until error or end of stream
    terminate = False
    while True:
        try:
            msg = bus.timed_pop_filtered(Gst.SECOND // 2, Gst.MessageType.ERROR | Gst.MessageType.EOS)
            if msg:
                terminate = True
        except KeyboardInterrupt:
            terminate = True
        if terminate:
            break

    option 2:
    GMain loop with callback
    (basic tutorial 12)
    start loop /* start main loop */
    g_main_loop_run (main_loop);
    # start main loop
    self.loop.run()
    data structure (to be available from callback)
    typedef struct _CustomData {
      gboolean is_live;
      GstElement *pipeline;
      GMainLoop *loop;
    } CustomData;
    CustomData data;

    data.loop = main_loop;
    data.pipeline = pipeline;
    self.loop
    self.pipeline

    option a: one single callback for all messages

    static void cb_message (GstBus *bus, GstMessage *msg, CustomData *data) {
      const GstStructure *structure;
      structure = gst_message_get_structure (msg);
      g_print ("Message name: %s\n", gst_structure_get_name (structure) );

      switch (GST_MESSAGE_TYPE (msg)) {
      ...
      }
    def on_message(self, bus, msg, user_data):
        t = msg.type
        if t == Gst.MessageType.EOS:
            print("End of stream")
            self.loop.quit()
        elif t == Gst.MessageType.ERROR:
            err, debug = msg.parse_error()
            print("Error: {0:s}".format(err.message))
            self.loop.quit()
        else:
            # unhandled message
            pass



    /* message handler */
    gst_bus_add_signal_watch (bus);
    g_signal_connect (bus, "message", G_CALLBACK (cb_message), &data);

    # general message handler
    bus.add_watch(GLib.PRIORITY_DEFAULT, self.on_message, None)
    option b: each message has its callback


    def on_error(self, bus, msg, user_data): ...
    def on_eos(self, bus, msg, user_data): ...
    def on_state_changed(self, bus, msg, user_data): ...
    def on_application_message(self, bus, msg, user_data): ...



    # individual message handler
    bus.add_signal_watch()
    bus.connect("message::error", self.on_error)
    bus.connect("message::eos", self.on_eos)
    bus.connect("message::state-changed", self.on_state_changed)
    bus.connect("message::application", self.on_application_message)

    ...
    quit loop

    self.loop.quit()
    Free resources


    /* free resources */
    if (msg != NULL)
      gst_message_unref (msg);
    gst_object_unref (bus);
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
    return 0;

    # free resources
    pipeline.set_state(Gst.State.NULL)




    if __name__ == '__main__':
        # to be able to use CTRL-C to quit
        import signal
        signal.signal(signal.SIGINT, signal.SIG_DFL)

        p = Player()


  • Application Development Manual
  • Application Development Manual (pdf, ps, html)
    • About GStreamer
      • What is GStreamer?
      • Design principles
      • Foundations
        • Communication between application / bus / pipeline:
          • buffers: streaming data between elements (downstream (sources->sinks)) (buffering)
          • events: between elements or from the application to elements (upstream (sinks->sources), downstream (sources->sinks))
          • messages: posted by elements on the pipeline's message bus (message types)
          • queries: allow applications to request information such as duration or current playback position from the pipeline (upstream, downstream) (querying)
    • Building an Application
      • Initializing GStreamer
        • C
          Python
          #include <gst/gst.h>
          import gi
          gi.require_version('Gst', '1.0')
          from gi.repository import Gst, GObject, GLib
          gst_init (&argc, &argv);
          Gst.init(None)

      • Elements
        • Creating elements

          C
          Python
          option 1: factory and element
          factory = gst_element_factory_find("fakesrc")
          element = gst_element_factory_create(factory, "source")
          factory = Gst.ElementFactory.find("fakesrc")
          source = factory.create("source")
          option 2: element
          element = gst_element_factory_make("fakesrc", "source")
          source = Gst.ElementFactory.make("fakesrc", "source")

        • Element States
          name
          C
          Python description

          gst_element_set_state (pipeline, GST_STATE_PLAYING); pipeline.set_state(Gst.State.PLAYING)
          NULL
          GST_STATE_NULL
          Gst.State.NULL
          the NULL state or initial state of an element. Transition to it will free all resources.
          READY
          GST_STATE_READY Gst.State.READY the element is ready to go to PAUSED
          PAUSED
          GST_STATE_PAUSED Gst.State.PAUSED the element is PAUSED, it is ready to accept and process data. Sink elements however only accept one buffer and then block
          PLAYING
          GST_STATE_PLAYING Gst.State.PLAYING the element is PLAYING, the clock is running and the data is flowing
      • Bins

        • C
          Python
          Bin





          gst_bin_new() Gst.Bin.new()
          gst_bin_add()

          gst_bin_remove()

          gst_bin_get_by_name()

          gst_bin_get_by_interface()

          gst_bin_iterate_elements()

          Pipeline
          (special top-level type of bin)


          gst_pipeline_new() pipeline = Gst.Pipeline.new("test-pipeline")

          pipeline.add(...)

          ...
      • Bus

        • C
          Python
          get all messages
          • bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
          • bus_watch_id = gst_bus_add_watch (bus, my_bus_callback, NULL);
          • static gboolean my_bus_callback(GstBus *bus, GstMessage *msg, gpointer data)
                ...
          • bus = self.pipeline.get_bus()
          • bus.add_watch(GLib.PRIORITY_DEFAULT, self.on_message, None)
          • def on_message(self, bus, msg, user_data):
                if msg.type == Gst.MessageType.ERROR:
                    err, debug = msg.parse_error()
                    print("Error: {0:s}".format(err.message))
                    self.loop.quit()
                elif msg.type == Gst.MessageType.EOS:
                    print("EOS")
                    self.loop.quit()
                else:
                    # unhandled message
                    pass
          get selected messages
          bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
          gst_bus_add_signal_watch (bus);
          g_signal_connect (bus, "message::error", G_CALLBACK (cb_message_error), NULL);
          g_signal_connect (bus, "message::eos", G_CALLBACK (cb_message_eos), NULL);
          bus = self.pipeline.get_bus()
          bus.add_signal_watch()
          bus.connect("message::error", self.on_error)
          bus.connect("message::eos", self.on_eos)

          (needed loop)
          loop = g_main_loop_new (NULL, FALSE);
          g_main_loop_run (loop);
          ...
          g_main_loop_unref (loop);
          loop = GLib.MainLoop.new(None, False)
          loop.run()
          ...
          loop.quit()
        • Tipus de missatges / Message types
          • message type
            connect(...)
            C
            Python



            Gst.MessageType
            Error (fatal problem),
            warning (non-fatal problem),
            information (not problem)
            message::error
            GST_MESSAGE_ERROR
            gst_message_parse_error()
            _parse_warning ()
            _parse_info ()
            Gst.MessageType.ERROR
            parse_error()
            parse_warning()
            parse_info()
            End-of-stream
            message::eos
            GST_MESSAGE_EOS
            Gst.MessageType.EOS
            Tags

            GST_MESSAGE_TAG
            gst_message_parse_tag()
            Gst.MessageType.TAG
            tags = msg.parse_tag()
            tags.foreach(self.print_one_tag, None)
            def print_one_tag(self, list, tag, user_data):
                res, val = Gst.TagList.copy_value(list, tag)
                print("%s: %s"% (Gst.tag_get_nick(tag), val))

            State-changes

            GST_MESSAGE_STATE_CHANGE
            gst_message_parse_state_changed ()
            Gst.MessageType.STATE_CHANGED
            old, new, pending = msg.parse_state_changed()
            print("State changed: {0:s} -> {1:s}".format(Gst.Element.state_get_name(old), Gst.Element.state_get_name(new)))
            message::async-done
            GST_MESSAGE_ASYNC_DONE Gst.MessageType.ASYNC_DONE
            Buffering
            message::buffering
            GST_MESSAGE_BUFFERING
            gst_message_parse_buffering (message, &percent);
            Gst.MessageType.BUFFERING
            percent = message.parse_buffering()
            Element



            Application-specific

            gst_message_get_structure()
            Threads
            sync-message::stream-status
            GST_MESSAGE_STREAM_STATUS
            gst_message_parse_stream_status (message, &type, &owner);
            Gst.MessageType.STREAM_STATUS
            type, owner = message.parse_stream_status()

      • Pads and capabilities
        • Pads
          availability
          examples
          C
          Python


          • pad = gst_element_get_static_pad(...)
          • pad = gst_element_get_compatible_pad (mux, tolink_pad, NULL);
          • pad = gst_element_get_request_pad (tee, "src_%u");
          • name = gst_pad_get_name (pad);
          always



          dynamic (sometimes) pads
          • demuxer
          • /* listen for newly created pads */
            g_signal_connect (demux, "pad-added", G_CALLBACK (cb_new_pad), NULL);
          • static void cb_new_pad(...)
            • gst_element_set_state ()
            • gst_element_sync_state_with_parent ()
          • # listen for newly created pads
            self.demux.connect("pad-added", self.on_new_pad)
          • def on_new_pad(self):
            • ...
          request pads
          (basic tutorial 3)
          • multiplexer
          • aggregator
          • tee


        • Capabilities of a pad
          • GstCaps
            • non-negotiated pad: one or more GstStructure
            • negotiated pad: only one GstStructure (with fixed values)
          • ...
            • possible caps: obtained with gst-inspect
            • allowed caps: subset of possible capabilities, depending on the possible caps of the peer pad
            • negotiated caps:
          • Types of caps:
            • type
              Gst.Structure
              values
              empty
              0

              ANY


              simple
              1
              variable field types
              fixed
              1
              no variable field types

          • C
            Python
            check type
            gst_caps_is_fixed (caps)
            caps.is_fixed()
            get structure
            str = gst_caps_get_structure (caps, 0);
            str = caps.get_structure(0)
            get value
            gst_structure_get_int (str, "width", &width)
            width = str.get_int("width")
            creation of simple caps
            caps = gst_caps_new_simple ("video/x-raw",
                      "format", G_TYPE_STRING, "I420",
                      "width", G_TYPE_INT, 384,
                      "height", G_TYPE_INT, 288,
                      "framerate", GST_TYPE_FRACTION, 25, 1,
                      NULL);
            caps = Gst.Caps.new_empty_simple("video/x-raw")
            caps.set_value("format", "I420")
            caps.set_value("width", 384)
            caps.set_value("height", 288)
            caps.set_value("framerate", ...)

            creation of full caps
              caps = gst_caps_new_full (
                  gst_structure_new ("video/x-raw",
                         "width", G_TYPE_INT, 384,
                         "height", G_TYPE_INT, 288,
                         "framerate", GST_TYPE_FRACTION, 25, 1,
                         NULL),
                  gst_structure_new ("video/x-bayer",
                         "width", G_TYPE_INT, 384,
                         "height", G_TYPE_INT, 288,
                         "framerate", GST_TYPE_FRACTION, 25, 1,
                         NULL),
                  NULL);
            (unavailable)
            filtering using caps (internally creates a capsfilter)
            link_ok = gst_element_link_filtered (element1, element2, caps);
            link_ok = element1.link_filtered(element2, caps)
          • Ghost pads
            • "A ghost pad is a pad from some element in the bin that can be accessed directly from the bin as well."
            • C
              Python
              gst_element_add_pad (bin, gst_ghost_pad_new ("sink", pad));
              ghost_pad = Gst.GhostPad.new("sink", pad)
              bin.add_pad(ghost_pad)
      • Buffers and events
        • Buffers
        • Events
          • "Events are control particles that are sent both upstream (right to left) and downstream (left to right) in a pipeline along with buffers."
          • Examples: seeking, flushes, end-of-stream notifications, ...

          • C
            Python
            create
            event = gst_event_new_seek (1.0, GST_FORMAT_TIME,
                              GST_SEEK_FLAG_NONE,
                              GST_SEEK_TYPE_SET, time_ns,
                              GST_SEEK_TYPE_NONE, G_GUINT64_CONSTANT (0));
            event = Gst.Event.new_seek(1.0, Gst.Format.TIME,
                Gst.SeekFlags.NONE,
                Gst.SeekType.SET, time_ns,
                Gst.SeekType.NONE, 0)
            send
            gst_element_send_event (element, event);
            element.send_event(event)

      • Your first application
    • Advanced GStreamer Concepts
      • Position tracking and seeking (basic_tutorial_4)

        • C
          Python
          Querying (queries)
          • static gboolean  cb_print_position(GstElement *pipeline)
            • gst_element_query_position (pipeline, GST_FORMAT_TIME, &pos)
            • gst_element_query_duration (pipeline, GST_FORMAT_TIME, &len)
          • g_timeout_add (200, (GSourceFunc) cb_print_position, pipeline);
          • g_timeout_add_seconds (1, (GSourceFunc) cb_print_position, pipeline);
          • g_main_loop_run (loop);
          • def cb_print_position(pipeline)
            • ret, pos = pipeline.query_position(Gst.Format.TIME)
            • ret, len = pipeline.query_duration(Gst.Format.TIME)
            • print("{0:f}/{1:f}".format(pos, len))
          • GLib.timeout_add(200, cb_print_position, pipeline)
            • the function is called repeatedly until it returns False
          • GLib.timeout_add_seconds(1, cb_print_position, pipeline)
          • loop.run()
          Events: seeking (and more)
          • gst_element_seek (pipeline, 1.0,
                GST_FORMAT_TIME,
                GST_SEEK_FLAG_FLUSH,
                GST_SEEK_TYPE_SET, time_nanoseconds,
                GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE)
          • gst_element_seek_simple (...)
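With Gst.Format.TIME, query_position() and query_duration() return values in nanoseconds. A small pure-Python helper for printing them (the GST_SECOND constant below matches Gst.SECOND):

```python
GST_SECOND = 1_000_000_000  # Gst.SECOND: one second in nanoseconds

def format_ns(t):
    """Format a nanosecond timestamp as H:MM:SS (roughly GST_TIME_FORMAT)."""
    s = t // GST_SECOND
    return "{}:{:02}:{:02}".format(s // 3600, (s // 60) % 60, s % 60)

# e.g. print position/duration obtained from the queries above:
# ret, pos = pipeline.query_position(Gst.Format.TIME)
# print(format_ns(pos))
```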

      • Metadata
        • Types
          • stream tags: non-technical information (author, title, album ...)
          • stream-info: technical information (GstPad, GstCaps)

        • C
          Python
          Metadata reading

          tags.py
          Tag writing



      • Interfaces
        • GstColorBalance
        • GstVideoOverlay
        • ...
      • Clocks and synchronization in GStreamer
      • Buffering



        • Stream buffering: "Buffering up to a specific amount of data, in memory, before starting playback so that network fluctuations are minimized." Buffer element: queue2
          • low watermark
          • high watermark
        • Download buffering: "Download of the network file to a local disk with fast seeking in the downloaded data. This is similar to the quicktime/youtube players." buffering.py
        • Timeshift buffering: "Caching of (semi)-live streams to a local, on disk, ringbuffer with seeking in the cached area. This is similar to tivo-like timeshifting."


        • C
          Python


          buffering.py (download buffering)
          messages
          gst_message_parse_buffering (message, &percent);
          percent = message.parse_buffering()
          queries
          query = gst_query_new_buffering (GST_FORMAT_TIME);
          gst_element_query (pipeline, query)
          gst_query_parse_buffering_percent (query, &busy, &percent);
          gst_query_parse_buffering_range (query, NULL, NULL, NULL, &estimated_total);
          query = Gst.Query.new_buffering(Gst.Format.TIME)
          pipeline.query(query)
          busy, percent = query.parse_buffering_percent()
          format, start, stop, estimated_total = query.parse_buffering_range()
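A minimal sketch of the usual policy driven by these buffering messages: keep the pipeline PAUSED while the buffer fills and resume at 100% (live pipelines must not be paused for buffering). The helper below is hypothetical; a real application would call it from the "message::buffering" bus handler with the value from message.parse_buffering() and pass the result to pipeline.set_state():

```python
# Hypothetical policy helper: decide the pipeline state from the buffering
# percentage reported by queue2.  Returns a state name instead of calling
# pipeline.set_state() so it can be used and tested standalone.
def target_state(percent, is_live=False):
    if is_live:
        return "PLAYING"  # never pause a live pipeline for buffering
    return "PLAYING" if percent >= 100 else "PAUSED"
```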

      • Dynamic Controllable Parameters

        • C
          Python

          GstControlSource
          Gst.ControlSource
          create
          csource = gst_interpolation_control_source_new ();
          g_object_set (csource, "mode", GST_INTERPOLATION_MODE_LINEAR, NULL);

          attach to the GObject property
          gst_object_add_control_binding (object, gst_direct_control_binding_new (object, "prop1", csource));


          GstTimedValueControlSource *tv_csource = (GstTimedValueControlSource *)csource;
          gst_timed_value_control_source_set (tv_csource, 0 * GST_SECOND, 0.0);
          gst_timed_value_control_source_set (tv_csource, 1 * GST_SECOND, 1.0);
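A pure-Python illustration (no GStreamer required) of what GST_INTERPOLATION_MODE_LINEAR computes from the two timed values set above: the control source linearly interpolates the bound property between its control points.

```python
GST_SECOND = 1_000_000_000  # Gst.SECOND

def interpolate(points, t):
    """Linear interpolation between (timestamp_ns, value) control points,
    clamping outside the covered range, as the linear mode does."""
    points = sorted(points)
    if t <= points[0][0]:
        return points[0][1]
    if t >= points[-1][0]:
        return points[-1][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# the control points set in the C snippet above: 0 s -> 0.0, 1 s -> 1.0
points = [(0 * GST_SECOND, 0.0), (1 * GST_SECOND, 1.0)]
```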


      • Threads
        • Scheduling in GStreamer
          • pad can:
            • push from upstream
            • pull to downstream
        • Configuring threads in GStreamer
          • message: STREAM_STATUS
            • GST_STREAM_STATUS_TYPE_CREATE: when a new thread is about to be created -> you can configure a GstTaskPool in the GstTask
            • when a thread is entered or left -> you can configure thread priorities
            • when a thread starts, pauses and stops -> you can visualize the status of streaming in a gui application
          • Boost priority of a thread
        • When would you force a thread?
      • Autoplugging
      • Pipeline manipulation
        • Using probes
          • Data probes
          • Play a section of a media file
        • Manually adding or removing data from/to a pipeline
        • Forcing a format
        • Dynamically changing the pipeline
    • Higher-level interfaces for GStreamer applications
      • Playback Components
    • Appendices
    • ...
  • Tutorials (source code in gst-docs/examples/tutorials) (playback tutorials are based on playbin)

    C
    Python
    Table of Concepts
    Basic tutorials / Playback tutorials
    francesc.pinyol.m/python-gst-examples
    gkralik / python-gst-tutorial
    GstreamerCodeSnippets
    Python/pygst-sdk-tutorials
    GstreamerCodeSnippets
    Others/0.10/Python/pygst-sdk-tutorials
    GstreamerCodeSnippets
    pygst-tutorial
    (class GTK_Main)

    Basic tutorial 1: Hello world!

    basic-tutorial-1.py basic-tutorial-1.py

    Bus
    Elements
    Links
    Pipelines
    Basic tutorial 2: GStreamer concepts

    basic-tutorial-2.py (buggy)
    basic-tutorial-2-ex-vertigo.py

    basic-tutorial-2.py

    Pads
    Signals
    States
    Basic tutorial 3: Dynamic pipelines
    • CustomData data
    • data.source
    • data.convert
    • data.sink
    • g_signal_connect (data.source, "pad-added", G_CALLBACK (pad_added_handler), &data);

    basic-tutorial-3-ex-video.py
    basic-tutorial-3.py
    • class Player
      • def __init__(self)
        • self.source
        • self.convert
        • self.sink
        • self.source.connect("pad-added", self.on_pad_added)
      • def on_pad_added(self, src, new_pad)
    basic-tutorial-3.py
    • def pad_added_handler(src, new_pad, data)
    • data["source"]
    • data["convert"]
    • data["sink"]
    • data["source"].connect("pad-added", pad_added_handler, data)



    (you can jump to Playback tutorials)





    Queries
    Seeks
    Basic tutorial 4: Time management

    basic-tutorial-4.py
    • class Player
      • def __init__
      • def play
        • bus = self.playbin.get_bus()
        • msg = bus.timed_pop_filtered(100 * Gst.MSECOND, (Gst.MessageType.STATE_CHANGED | Gst.MessageType.ERROR | Gst.MessageType.EOS | Gst.MessageType.DURATION_CHANGED))
        • if msg: self.handle_message(msg)
        • self.playbin.query_position
        • self.playbin.query_duration
        • self.playbin.seek_simple
      • def handle_message
    basic-tutorial-4.py

    GUI
    Basic tutorial 5: GUI toolkit integration
    • gst_bus_add_signal_watch (bus);

    basic-tutorial-5.py
    • class Player
      • def __init__
        • Gtk.init(sys.argv)
        • Gst.init(sys.argv)
        • # connect to interesting signals in playbin
          self.playbin.connect("video-tags-changed", self.on_tags_changed)
          self.playbin.connect("audio-tags-changed", self.on_tags_changed)
          self.playbin.connect("text-tags-changed", self.on_tags_changed)
        • # instruct the bus to emit signals for each received message
          # and connect to the interesting signals
          bus = self.playbin.get_bus()
          bus.add_signal_watch()
          bus.connect("message::error", self.on_error)
          bus.connect("message::eos", self.on_eos)
          bus.connect("message::state-changed", self.on_state_changed)
          bus.connect("message::application", self.on_application_message)
      • def start
        • GLib.timeout_add_seconds(1, self.refresh_ui)
        • Gtk.main()
      • def cleanup
      • def build_ui
      • def on_realize
      • def on_play
      • def on_pause
      • def on_stop
      • ...
      • def on_tags_changed
        • self.playbin.post_message
      • def on_error
      • def on_eos
      • def on_state_changed
      • ...
      • def analyze_streams



    Capabilities
    Basic tutorial 6: Media formats and Pad Capabilities
    • static void print_pad_capabilities
      • /* Retrieve negotiated caps (or acceptable caps if negotiation is not finished yet) */
        caps = gst_pad_get_current_caps (pad);
        if (!caps)
          caps = gst_pad_query_caps (pad, NULL);

    basic-tutorial-6.py
    • def print_field
    • def print_caps(caps, pfx)
      • structure = caps.get_structure(i)
      • structure.foreach(print_field, pfx)
    • def print_pad_templates_information(factory)
      • pads = factory.get_static_pad_templates()
      • for pad in pads:
        • padtemplate = pad.get()
        • if padtemplate.get_caps():
          • print_caps(padtemplate.get_caps(), "      ")
    • def print_pad_capabilities(element, pad_name)
      • pad = element.get_static_pad(pad_name)
      • # retrieve negotiated caps (or acceptable caps if negotiation is not yet finished)
        caps = pad.get_current_caps()
        if not caps:
            caps = pad.query_caps(None)
      • print_caps(caps, "      ")
    • def main
      • sink_factory = Gst.ElementFactory.find("autoaudiosink")
      • print_pad_templates_information(sink_factory)
      • sink = sink_factory.create("sink")
      • print_pad_capabilities(sink, "sink")



    Pad availability
    • always
    • sometimes
    • on request
    Threads
    Basic tutorial 7: Multithreading and Pad Availability

    basic-tutorial-7.py
    • def main
      • # manually link the tee, which has "Request" pads
        tee_src_pad_template = tee.get_pad_template("src_%u")
           
        tee_audio_pad = tee.request_pad(tee_src_pad_template, None, None)
        audio_queue_pad = audio_queue.get_static_pad("sink")
        tee_audio_pad.link(audio_queue_pad)

        tee_video_pad = tee.request_pad(tee_src_pad_template, None, None)
        video_queue_pad = video_queue.get_static_pad("sink")
        tee_video_pad.link(video_queue_pad)



    Buffers
    • GstBuffer: chunk of data. Can contain multiple GstMemory (memory buffer)
    Basic tutorial 8: Short-cutting the pipeline (same as 7, replacing audiotestsrc -> appsrc; adding a third branch appsink)
    • appsrc
    • appsink
    • /* Configure appsrc */
      gst_audio_info_set_format (&info, GST_AUDIO_FORMAT_S16, SAMPLE_RATE, 1, NULL);
      audio_caps = gst_audio_info_to_caps (&info);
      g_object_set (data.app_source, "caps", audio_caps, NULL);
      g_signal_connect (data.app_source, "need-data", G_CALLBACK (start_feed), &data);
      g_signal_connect (data.app_source, "enough-data", G_CALLBACK (stop_feed), &data);
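A hedged sketch of producing the raw samples such an appsrc setup expects: signed 16-bit mono audio at the configured rate. In a need-data callback the returned bytes could be wrapped with Gst.Buffer.new_wrapped() and pushed with the appsrc "push-buffer" action signal; the helper name and parameters are illustrative, not part of the tutorial.

```python
import array
import math

SAMPLE_RATE = 44100  # must match the rate set in the appsrc caps

def make_sine_s16(freq_hz, num_samples, rate=SAMPLE_RATE, amplitude=0.5):
    """Generate num_samples of a sine tone as raw S16 mono bytes."""
    samples = array.array("h")  # "h" = signed 16-bit, like GST_AUDIO_FORMAT_S16
    for n in range(num_samples):
        value = amplitude * math.sin(2 * math.pi * freq_hz * n / rate)
        samples.append(int(value * 32767))
    return samples.tobytes()

data = make_sine_s16(440, 1024)  # 1024 samples of A440, 2 bytes each
```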
    Playback tutorial 3: Short-cutting the pipeline




    Discoverer
    Basic tutorial 9: Media information gathering
    • GstDiscoverer
    basic-tutorial-9.py




    gst-discoverer-1.0
    gst-launch-1.0
    Tools
    Basic tutorial 10: GStreamer tools





    Debugging
    Basic tutorial 11: Debugging tools





    Basic tutorial 12: Streaming
    • Setting live streams to PAUSED succeeds, but returns GST_STATE_CHANGE_NO_PREROLL instead of GST_STATE_CHANGE_SUCCESS, to indicate that this is a live stream.
    basic-tutorial-12.py





    Basic tutorial 13: Playback speed






    Basic tutorial 14: Handy elements
    • Bins
      • playbin
      • uridecodebin
      • decodebin
    • File input/output
      • filesrc
      • filesink
    • Network
      • souphttpsrc
    • Test media generation
      • videotestsrc
      • audiotestsrc
    • Video adapters
      • videoconvert
      • videorate
      • videoscale
    • Audio adapters
      • audioconvert
      • audioresample
      • audiorate
    • Multithreading
      • queue
      • queue2
      • multiqueue
      • tee
    • Capabilities
      • capsfilter
      • typefind
    • Debugging
      • fakesink
      • identity






    Basic tutorial 16: Platform-specific elements





    Action signals
    Audio switching
    Tags
    Playback tutorial 1: Playbin usage
    • gst_bus_add_watch (bus, (GstBusFunc) handle_message, &data);





    Subtitles
    Playback tutorial 2: Subtitle management





    Playback tutorial 3: Short-cutting the pipeline





    Playback tutorial 4: Progressive streaming





    Playback tutorial 5: Color Balance





    Playback tutorial 6: Audio visualization





    Playback tutorial 7: Custom playbin sinks





    Playback tutorial 8: Hardware-accelerated video decoding





    Playback tutorial 9: Digital audio pass-through




    ...
  • Encoding profiles and targets
    • Exemples / Examples
    • Estructura / Structure
      • target
        • container profile
          • video profile
            • video preset
          • audio profile
            • audio preset
    • ...


    • syntax
      C
      Python
      Encoding target

      • location of target files (*.gep)
        • $GST_DATADIR/gstreamer-GST_API_VERSION/encoding-profiles/
          • /usr/share/gstreamer-1.0/encoding-profiles/*.gep
        • $HOME/gstreamer-GST_API_VERSION/encoding-profiles/
        • ~/.local/share/gstreamer-1.0/encoding-profiles/<category>/<name>.gep
        • $GST_ENCODING_TARGET_PATH
        • Pitivi:
          • /usr/share/pitivi/gstpresets/*.gep
      • $(target.category)/$(target.name).gep
        • [GStreamer Encoding Target]
          name : <name>
          category : <category>
          description : <description> #translatable

          [profile-<profile1name>]
          name : <name>
          description : <description> #optional
          format : <format>
          preset : <preset>

          [streamprofile-<id>]
          parent : <encodingprofile.name>[,<encodingprofile.name>..]
          type : <type> # "audio", "video", "text"
          format : <format>
          preset : <preset>
          restriction : <restriction>
          presence : <presence>
          pass : <pass>
          variableframerate : <variableframerate>
      • device/mp4target.gep
        • [GStreamer Encoding Target]
          name=mp4target
          category=device
          description=MP4 (H.264, AAC) target

          [profile-mp4]
          name=mp4
          type=container
          description[c]=MP4 container profile
          format=video/quicktime, variant=(string)iso

          [streamprofile-mp4-0]
          parent=mp4
          type=video
          format=video/x-h264
          restriction=video/x-raw
          presence=0
          pass=0
          variableframerate=false

          [streamprofile-mp4-1]
          parent=mp4
          type=audio
          format=audio/mpeg, mpegversion=(int)4
          restriction=audio/x-raw
          presence=0

          [profile-(null)]
          type=audio
          format=audio/mpeg, mpegversion=(int)4
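The .gep target above uses plain key-file (INI) syntax, so it can be inspected from Python with the stdlib configparser (GStreamer itself reads it with GLib's GKeyFile). The embedded string below is a trimmed copy of the example target:

```python
import configparser

# trimmed copy of the device/mp4target.gep example above
GEP = """\
[GStreamer Encoding Target]
name=mp4target
category=device
description=MP4 (H.264, AAC) target

[profile-mp4]
name=mp4
type=container
format=video/quicktime, variant=(string)iso

[streamprofile-mp4-0]
parent=mp4
type=video
format=video/x-h264
presence=0
"""

cp = configparser.ConfigParser()
cp.read_string(GEP)
target = cp["GStreamer Encoding Target"]
```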

      GstEncodingTarget
      • gst_encoding_list_all_targets
      GstPbutils.EncodingTarget
      • # create target with all profiles
        # The name and category can only consist of lowercase ASCII letters for the first character, followed by either lowercase ASCII letters, digits or hyphens ('-').
        name = "mp4target"
        category = GstPbutils.ENCODING_CATEGORY_DEVICE # "device"
        description = "MP4 (H.264, AAC) target"
        profiles = [container_profile, video_profile, audio_profile]
        target = GstPbutils.EncodingTarget.new(name, category, description, profiles)
      • # save target to ~/.local/share/gstreamer-1.0/encoding-profiles/<category>/<name>.gep
        ret = target.save()

      • # list targets for all categories
        category = None
        target_list = GstPbutils.encoding_list_all_targets(category)
        print("target_list: {}".format(target_list))

      Encoding profile
      (gst-validate-transcoding)




      • general syntax:
        • mux_format:[video_restriction->]video_format[+video_preset][|video_presence]:[audio_restriction->]audio_format[+audio_preset][|audio_presence]
      • element factory:
        • <muxer_factory_name>:<video_encoder_factory_name>:<audio_encoder_factory_name>
        • webmmux:vp8enc:vorbisenc
      • caps:
        • <muxer_source_caps>:<video_encoder_source_caps>:<audio_encoder_source_caps>
        • WebM (VP8 + Vorbis): video/webm:video/x-vp8:audio/x-vorbis
        • MP4 (H.264 + MP3): video/quicktime,variant=iso:video/x-h264:audio/mpeg,mpegversion=1,layer=3
        • OGG (Theora + Vorbis): application/ogg:video/x-theora:audio/x-vorbis
        • MPEG-TS (H.264 + AC3): video/mpegts:video/x-h264:audio/x-ac3
      • caps + preset (/usr/share/gstreamer-1.0/presets/*.prs, /usr/share/pitivi/gstpresets/*.prs, ~/.local/share/gstreamer-1.0/presets/*.prs):
      • caps + presence (number of times an encoding profile can be used inside an encodebin; 0: any):
        • video/webm:video/x-vp8|1:audio/x-vorbis
      • caps + restriction:
        • ...:restriction_caps->encoded_format_caps:...
        • "video/webm:video/x-raw,width=1920,height=1080->video/x-vp8:audio/x-vorbis"
        • "matroskamux:x264enc,width=1920,height=1080:audio/x-vorbis"
      • loading profile from encoding target:
        • target_name[/profilename/category]
        • /path/to/target.gep:profilename
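A hypothetical helper (not part of GStreamer) that assembles the serialized profile string from the pieces described above, covering the |presence and restriction-> variants for the video stream:

```python
# Illustrative only: build "mux:video:audio" encoding-profile strings with
# optional restriction caps ("restriction->format") and presence ("format|n").
def profile_string(mux, video, audio,
                   video_presence=None, video_restriction=None):
    v = video
    if video_restriction:
        v = "{}->{}".format(video_restriction, v)
    if video_presence is not None:
        v = "{}|{}".format(v, video_presence)
    return ":".join([mux, v, audio])
```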
      GstEncodingProfile
      GstPbutils.EncodingProfile
      Container profile
      GstEncodingContainerProfile
      GstPbutils.EncodingContainerProfile
      • # container profile
        name = "mp4"
        description = "MP4 container profile"
        #container_caps = "video/webm"
        container_caps = "video/quicktime,variant=iso"
        format = Gst.Caps(container_caps)
        preset = None
        container_profile = GstPbutils.EncodingContainerProfile.new(name, description, format, preset)

      Video profile
      GstEncodingVideoProfile
      GstPbutils.EncodingVideoProfile
      • # video profile
        #video_caps = "video/x-vp8"
        video_caps = "video/x-h264"
        format = Gst.Caps(video_caps)
        preset = None
        restriction = Gst.Caps("video/x-raw")
        presence = 0 # allow any number of instances of this profile
        video_profile = GstPbutils.EncodingVideoProfile.new(format, preset, restriction, presence)
        container_profile.add_profile(video_profile)

      Audio profile
      GstEncodingAudioProfile
      GstPbutils.EncodingAudioProfile
      • # audio profile
        #audio_caps = "audio/x-vorbis"
        audio_caps = "audio/mpeg,mpegversion=4" # AAC
        format = Gst.Caps(audio_caps)
        preset = None
        restriction = Gst.Caps("audio/x-raw")
        presence = 0 # allow any number of instances of this profile
        audio_profile = GstPbutils.EncodingAudioProfile.new(format, preset, restriction, presence)
        container_profile.add_profile(audio_profile)

      Preset

      • location of preset files
        • /usr/share/gstreamer-1.0/presets/*.prs
        • /usr/share/pitivi/gstpresets/*.prs
        • ~/.local/share/gstreamer-1.0/presets/*.prs
        • GST_PRESET_PATH
      • <profile_name>.prs
        • ...

      Gst.Preset
  • gst-editing-services
    • Exemples / Examples
    • Resum / Summary
      • Estructura / Structure (see Pitivi):
        • pipeline
          • timeline
            • layer_1
              • clip_asset_1.1
              • clip_asset_1.2
              • ...
            • layer_2
              • clip_asset_2.1
              • ...


      • Python
        headers

        import sys
        import gi
        gi.require_version('Gst', '1.0')
        from gi.repository import Gst, GLib


        init Gst

        class Player(object):
            def __init__(self):
                # init GStreamer
                Gst.init(None)
        init GES



        # workaround to avoid "g_array_append_vals: assertion 'array' failed" when importing GES before Gst.init (using python3)
        gi.require_version('GES', '1.0')
        from gi.repository import GES
          
        # init GES
        GES.init()
        main GLib loop



        # create main glib loop
        self.loop = GLib.MainLoop.new(None, False)
        create timeline



        # create timeline
        timeline = GES.Timeline.new_audio_video()
        create asset/clip



        # create asset
        asset = GES.UriClipAsset.request_sync(uri)
        create layer in timeline



        # create layer
        layer = timeline.append_layer()
        put clips in layer



        # put clip in layer
        # start=0.0
        start_on_timeline = 0
        # inpoint=60.0
        start_position_asset = inpoint * Gst.SECOND
        # duration=5.0
        duration = duration * Gst.SECOND
        clip = layer.add_asset(asset, start_on_timeline, start_position_asset,
                        duration, GES.TrackType.UNKNOWN)
        create GES pipeline



        # create GES pipeline
        pipeline = GES.Pipeline()
        connect message bus to callback



        # connect bus messages to callback
        bus = pipeline.get_bus()
        bus.add_signal_watch()
        bus.connect("message", self.on_message, None)
        add timeline to pipeline



        # add timeline to pipeline
        pipeline.set_timeline(timeline)
        (optional: only render)
        ges_base_renderer.py
        containers for output format


        # container profile
        name = "mp4"
        description = "MP4 container profile"
        #container_caps = "video/webm"
        container_caps = "video/quicktime,variant=iso"
        format = Gst.Caps(container_caps)
        preset = None
        container_profile = GstPbutils.EncodingContainerProfile.new(name, description, format, preset)

        # video profile
        #video_caps = "video/x-vp8"
        video_caps = "video/x-h264"
        format = Gst.Caps(video_caps)
        preset = None
        restriction = Gst.Caps("video/x-raw")
        presence = 0 # allow any number of instances of this profile
        video_profile = GstPbutils.EncodingVideoProfile.new(format, preset, restriction, presence)
        container_profile.add_profile(video_profile)

        # audio profile
        #audio_caps = "audio/x-vorbis"
        audio_caps = "audio/mpeg,mpegversion=4"
        format = Gst.Caps(audio_caps)
        preset = None
        restriction = Gst.Caps("audio/x-raw")
        presence = 0 # allow any number of instances of this profile
        audio_profile = GstPbutils.EncodingAudioProfile.new(format, preset, restriction, presence)
        container_profile.add_profile(audio_profile)
        pipeline in render mode


        # pipeline in render mode
        pipeline.set_render_settings(output_uri, container_profile)
        pipeline.set_mode(GES.PipelineFlags.RENDER)
        progress


        # progress
        GLib.timeout_add(300, self.duration_querier, pipeline)
        start



        # start playing pipeline
        pipeline.set_state(Gst.State.PLAYING)
        self.loop.run()
        stop



        # unset
        pipeline.set_state(Gst.State.NULL)
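The three time arguments passed to layer.add_asset() in the table above are nanosecond values; a small pure-Python helper (GST_SECOND matches Gst.SECOND) that computes them from seconds:

```python
GST_SECOND = 1_000_000_000  # Gst.SECOND

def clip_times(start_s, inpoint_s, duration_s):
    """Convert clip timing from seconds to the nanosecond values expected by
    GES.Layer.add_asset(): start on the timeline, in-point inside the source
    asset, and clip duration."""
    return (int(start_s * GST_SECOND),
            int(inpoint_s * GST_SECOND),
            int(duration_s * GST_SECOND))

# the values used in the example above: start=0.0, inpoint=60.0, duration=5.0
start, inpoint, duration = clip_times(0.0, 60.0, 5.0)
```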

  • gst-transcoding
  • ...

http://www.francescpinyol.cat/.html
Primera versió: / First version: 27.X.2018
Darrera modificació: 20 d'abril de 2019 / Last update: 20th April 2019
