A few tips on HTTP live streaming from a surveillance camera (RTSP/h264)

This post can be considered as a continuation of my Zoneminder/Setup, Zoneminder/PTZ and Watchdog posts for my Vstarcam cameras.
In my case I had to deal with Vstarcam C7823WIP and C7824WIP cameras.

Once I was given the task of creating a few HTTP live streams on a web page.
I started to gather information about the possibilities in this field and was overwhelmed by the vast diversity of definitions, types, formats, players, browsers and their compatibility quirks.
It took me two days of reading different articles and forums just to begin to understand the basics.

First I tried to use the browser’s “native” video capabilities and therefore tried ffserver and WebM.
It might have been a good approach if I had high-end hardware, streamed from some kind of raw source like a web camera, or used a low-resolution stream.
But I already had a 720p h264 stream, and streaming it by means of ffserver/WebM would have required a full video transcode, which was absolutely impossible on my hardware.

Next, I tried to find a way to play an RTSP stream on a page directly, without installing any browser plugins (the VLC plugin etc.), because Vstarcam cameras provide RTSP streams.
This is how I found out that there are JavaScript video players out there… 🙂
They are not too heavy and have decent performance.
I found only one HTTP-RTSP player, but its installation procedure was rather complicated, so I continued my search.

Finally, I found this post on StackOverflow, which convinced me that “the right way” of HTTP streaming is the HLS/DASH family of technologies. As I understand it, these technologies (although they are not “true live streaming”) are widespread and well supported.
HLS supports h264 encoding, therefore I only had to repack my camera’s stream from RTSP into MPEG-TS segments, which is easily done by ffmpeg.

The whole idea looks like this:
0) There is a Linux server and one or more digital surveillance cameras. The server is accessible from the Internet; the cameras are not.
1) The cameras provide RTSP or another h264-encoded stream.
2) There is a web-accessible folder “StreamFolder” on the server (I think it is better to use some kind of ramdisk for this folder; a typical size is 32MB).
3) ffmpeg reads a stream from a camera, slices it into 2-5 second segments and puts these segments as separate files into the web-accessible StreamFolder along with a playlist.
4) ffmpeg rewrites the playlist and overwrites old slices according to its segmenter settings, so the typical amount of data on the server is less than 16MB per camera and the time lag is about 10 seconds.
5) On a web page I use a JavaScript HLS player, which fetches the playlist created by ffmpeg over HTTP and plays the segments listed in it. The same playlist can also be opened and played in VLC.
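For illustration, a live playlist produced by the ffmpeg segmenter looks roughly like the sketch below (segment names, durations and sequence numbers are just examples; the +live flag keeps the EXT-X-ENDLIST tag out, so players treat it as an ongoing stream):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:42
#EXT-X-TARGETDURATION:5
#EXTINF:5.000000,
out002.ts
#EXTINF:5.000000,
out003.ts
#EXTINF:5.000000,
out004.ts
```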

Unfortunately, none of the components of this system is reliable.
– Cheap cameras often produce an incorrect RTSP stream or even have unusual h264 settings. They can also hang or stop responding. Different cameras have different firmware, different glitches and different behavior.
– JavaScript video players handle different streams in different ways, so you should choose a video player which fits your particular situation.
– Web browsers have some influence on a player’s stability. A player which works for hours in Firefox may die in Chrome after 20 minutes of playback.

Because of this, the ffmpeg process should have the ability to restart itself in case of an error (termination).
I also noticed once that ffmpeg was running but the HLS files remained stale.
So, to create a reliable system on this kind of hardware, one must write at least a bunch of watchdog scripts.
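As a sketch of such a watchdog (the paths, the 30-second threshold and the service name hlsstream001 are my assumptions, adjust them to your setup), a script run periodically, e.g. from cron, can simply check whether the playlist file is still being updated and restart the stream service if it is not:

```shell
#!/bin/sh
# Watchdog sketch: ffmpeg may keep running while the HLS files stay stale,
# so we check the playlist's modification time instead of the process.
PLAYLIST=/ramdisk/playlist.m3u8
MAX_AGE=30   # seconds without an update before the stream is considered dead

is_stale() {
    # true (0) if the file is missing or has not been touched for MAX_AGE seconds
    [ ! -f "$1" ] && return 0
    now=$(date +%s)
    mtime=$(stat -c %Y "$1")
    [ $((now - mtime)) -gt "$MAX_AGE" ]
}

if is_stale "$PLAYLIST"; then
    echo "playlist is stale, restarting the stream service"
    # systemctl restart hlsstream001
fi
```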

First, I made an endeavor to use the RTSP stream from a Vstarcam C7823WIP camera as a source for ffmpeg with the hls.js video player on a web page.
This combination didn’t work at all.
I could view the HLS stream with the VLC video player, but in a browser I could see only the first frame at best.
I also tried hls.js with the RTSP stream from my old C7815WIP camera, and surprisingly it worked. The only problem with the C7815WIP was that the stream had only 15fps while the player played it at 30fps. This led to accelerated playback and early depletion of the video slices on the server. I think the wrong framerate is embedded in the RTSP stream itself, because even ffmpeg’s “-r” option didn’t help. So, the firmware version of your camera does matter.

Next, I replaced hls.js with video.js.
It worked where hls.js had failed, i.e. video.js could handle the RTSP stream from the C7823WIP much better than hls.js (which showed only one frame), but the quality was poor anyway.
Playback was unsteady and choppy, and it hung for a few seconds whenever a still scene changed to motion. It didn’t matter which type of encoding I used – constant or variable bitrate – although CBR is preferable in terms of stability.

After a few days of sadness, sorrow and whining I suddenly remembered that the native PC client for Vstarcam cameras does not use the RTSP stream and has very steady, reliable playback. Vstarcam’s native PC client uses the http://CAMERA_IP:CAMERA_WEB_PORT/livestream.cgi address. Moreover, a couple of years ago I accidentally found on the Internet, and wrote down, an ffmpeg command which shows how to use that stream.

I changed the source stream for ffmpeg from RTSP to livestream.cgi and it helped!
For the C7823WIP camera, livestream.cgi gives a stream of much better quality than the standard RTSP output.
No more pauses when movement begins, no more glitches etc.

Therefore, I advise you to try livestream.cgi if you have any problems with RTSP on the C7823WIP, on some older cameras, and maybe in some other cases.

So, for the ffmpeg segmenter with the C7823WIP camera and an RTSP source I used

/usr/bin/ffmpeg  -i rtsp://admin:888888@192.168.1.10:10554/udp/av1_0  \
-an \
-codec:v copy  \
-f ssegment \
-segment_list /ramdisk/playlist.m3u8 \
-segment_list_flags +live \
-segment_time 5 \
-segment_list_size 10 \
-segment_wrap 10  \
/ramdisk/out%03d.ts

av1_0: use the second stream at main resolution (as I understand it, there are three main-resolution streams: av0_0, av1_0, av2_0)
-an: no audio
-codec:v copy: copy the video stream without re-encoding
-f ssegment: use the ffmpeg stream segmenter (I use the generic segment muxer although there is a dedicated HLS muxer)
and so on

For the ffmpeg segmenter with livestream.cgi as the source I used

/usr/bin/ffmpeg  -r 15 -i "http://192.168.1.10:81/livestream.cgi?user=admin&pwd=888888&streamid=0"  \
-an \
-codec:v copy  \
-f ssegment \
-segment_list /ramdisk/playlist.m3u8 \
-segment_list_flags +live \
-segment_time 5 \
-segment_list_size 10 \
-segment_wrap 10  \
/ramdisk/out%03d.ts

In the case of livestream.cgi you should provide the camera’s actual framerate to ffmpeg (-r 15). This is not a problem for the C7823WIP, because this camera is able to maintain the given framerate in any lighting conditions. Its picture is quite noisy and it needs a lot of light before it switches off IR night vision, but it keeps the framerate you have set up.
Later I got a C7824WIP camera, where I was unable to use livestream.cgi, because in difficult lighting conditions this camera typically decreases the framerate to 6fps.
The C7824WIP has a much better picture and less noise in difficult lighting conditions (and therefore a better-compressed video stream), but you can’t predict its framerate.
Fortunately, the C7824WIP has some improvements in its RTSP implementation and can be used without problems. With RTSP you don’t have to provide the framerate – it is carried by the protocol (I suppose).
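To see what framerate a camera actually advertises, you can ask ffprobe (the URL in the comment is the example RTSP address used in this post; ffprobe reports the rate as a fraction such as 15/1):

```shell
# Query the advertised framerate of the video stream (prints e.g. "15/1"):
#   ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate \
#           -of default=noprint_wrappers=1:nokey=1 \
#           rtsp://admin:888888@192.168.1.10:10554/udp/av1_0

# Small helper to turn the "num/den" fraction into plain fps:
fps_of() {
    echo "$1" | awk -F/ '{ printf "%g\n", $1 / $2 }'
}

fps_of "15/1"        # -> 15
fps_of "30000/1001"  # -> 29.97 (NTSC-style rate)
```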

Later, I switched back to hls.js because it is smaller and offers more configuration options (which wasn’t very helpful, though).
hls.js works well with livestream.cgi from the C7823WIP (as I mentioned above, it couldn’t play the HLS stream created from the RTSP source of the C7823WIP).
Both tested video players (video.js and hls.js) can play my HLS stream in Firefox for 8 or more hours, and both of them die in Chrome after about an hour of playback.
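For reference, a minimal page wiring hls.js to a video element could look like the sketch below (the playlist path is an assumption based on the StreamFolder layout above; Hls.isSupported, loadSource and attachMedia are hls.js’s standard API):

```html
<video id="video" muted autoplay></video>
<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
<script>
  var video = document.getElementById('video');
  if (Hls.isSupported()) {
    var hls = new Hls();
    hls.loadSource('/StreamFolder/stream001/playlist.m3u8');
    hls.attachMedia(video);
  } else if (video.canPlayType('application/vnd.apple.mpegurl')) {
    // Safari and iOS can play HLS natively
    video.src = '/StreamFolder/stream001/playlist.m3u8';
  }
</script>
```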

Now I use livestream.cgi as the source for the C7823WIP and RTSP as the source for the C7824WIP.
The ffmpeg commands are arranged as systemd services with automatic restart, which is very useful.
Repackaging two 720p streams seems to require almost no CPU power at all.

TS segments are written to a tmpfs filesystem (ramdisk).
You can create a ramdisk manually by mounting a chunk of tmpfs onto a folder:

mount -t tmpfs -o size=32M tmpfs /ramdisk/

But for a permanent service you should create an fstab record:

tmpfs /ramdisk tmpfs rw,size=32M 0 0

Everything mentioned above was at its latest firmware/software version as of mid-summer 2017, when all this mess took place.

UPDATE December 2015
After a firmware update, both of my cameras (7823 and 7824) now have a variable framerate. They both decrease it to 6fps in low-light conditions, so it became impossible to use livestream.cgi because I don’t know the current framerate (or don’t know where I can read it). Fortunately, the new firmware also brought an improvement in the RTSP implementation, so there is no need for livestream.cgi now.
Everything works well over RTSP.

daemon (hlsstream001.pl):
#! /usr/bin/perl
 
use strict;
use utf8;
use IO::File;
 
$SIG{__WARN__}  = \&alert_sysadmin;
$SIG{__DIE__}   = \&alert_sysadmin;
$SIG{HUP}       = \&do_reload;
 
my $PIDFILE     = '/tmp/hlsstream001.pid';
 
open(STDIN, '</dev/null');
open(STDOUT, '>/dev/null');
open(STDERR, '>&STDOUT');
chdir '/path/to/daemon';
 
# there must be only one instance of daemon
my $pidfh = IO::File->new($PIDFILE, O_WRONLY|O_CREAT|O_EXCL, 0644);
if($pidfh){
    print $pidfh $$;
    close $pidfh;
}else{
    alert_sysadmin("first open of the pidfile failed $!");
    open my $pidfile, '<', $PIDFILE or die "unable to open existing pid file for reading";
    my $pid = <$pidfile>;
    if( $pid =~ /^\d+$/){
      if(kill(0, $pid)){# process exists
        alert_sysadmin("daemon process already exists"), die;
      }else{# process doesn't exist
        create_pid_file();
      }
    }else{# file corrupted
      create_pid_file();
    }
}
 
# ramdisk has a small amount of space
# so after daemon restart we should delete old files
`rm -f /StreamFolder/stream001/*`;
 
alert_sysadmin('daemon started');
 
# the segment name prefix must be unique to prevent glitches in the player
my $outfname = `cat /dev/urandom | tr -cd 'a-f0-9' | head -c 32`;
 
# the main command (it's blocking); note the escaped @ in the URL
my $r = `/usr/bin/ffmpeg   -i rtsp://admin:888888\@192.168.85.202:10554/udp/av0_0  -an -codec:v copy  -f ssegment -segment_list /StreamFolder/stream001/playlist.m3u8 -segment_list_flags +live -segment_time 2 -segment_list_size 20 -segment_wrap 20  /StreamFolder/stream001/$outfname%03d.ts`;
 
 
sub do_reload{
        alert_sysadmin("reload requested");
        unlink $PIDFILE;
        exec './hlsstream001.pl';
}
 
# send a message via email or jabber
sub alert_sysadmin{
        my $message = shift;
        system('/path/jabbersend.pl', "HLS stream 001 ALERT: $message");
}
 
sub create_pid_file{
  unlink $PIDFILE or die "unable to unlink old pidfile";
  my $pidfh = IO::File->new($PIDFILE, O_WRONLY|O_CREAT|O_EXCL, 0644);
  die "unable to create pidfile exclusively" unless $pidfh;
  print $pidfh $$;
  close $pidfh;
  alert_sysadmin("pidfile recreated");
}

And corresponding systemd unit:


[Unit]
Description=First HLS stream
After=syslog.target
After=network.target

[Service]
Type=simple
PIDFile=/tmp/hlsstream001.pid
WorkingDirectory=/path/to/daemon
User=myuser
Group=myuser

OOMScoreAdjust=-100
ExecStart=/path/to/daemon/hlsstream001.pl
ExecStop=/bin/kill -TERM  $MAINPID
ExecReload=/bin/kill -HUP $MAINPID

TimeoutSec=15
RestartSec=15
Restart=always

[Install]
WantedBy=multi-user.target

The unit file should be placed in /etc/systemd/system (e.g. as hlsstream001.service) and installed with

systemctl enable hlsstream001

and launched with

systemctl start hlsstream001

1 Comment

  • Csongor says:

    Thanks a lot for this post. I am trying to create something similar: I have a Reolink IP Camera with an RTSP stream and I want to convert it to an HLS stream which I can (I hope) give to my Google Home Hub, so it shows a live video of our front door when somebody rings the doorbell. My Linux skills are patchy at best, and I don’t understand what I need to do. My server is a Raspberry PI with Raspbian.
    1) install ffmpeg, I assume sudo apt-get install ffmpeg?
    2) create a ramdisk
    3) I create my hlsstream001.pl (update the URL) and just place it in /path/to/daemon/
    4) I create systemd unit as “hlsstream001” under /etc/systemd/system as in your example (checking that the paths are correct)
    5) Install the system
    6) Launch or stop the system. Is that all? Did I miss something?
    I will launch the system on demand when somebody presses the doorbell, run it for 1 minute and shut it down. Or is there an easier way if I do it as an executable which only runs for a set amount of time?
    What will be the URL of the HLS stream?

    Thanks a lot in advance,
    Csongor