Because of a demo possibly involving management, I was looking for a way to record my desktop. Video recording/editing on Linux still isn’t a great experience, but it’s getting better. There are still too many apps that are unreliable or terrible, especially in the Ubuntu Software Center.
I chose ffmpeg and Blender, neither an obvious choice for the uninitiated.
ffmpeg is a collection of multimedia libraries and command-line tools. It’s rock-solid and can record desktop footage via the x11grab format. It’s CLI-based, which I consider a bonus. Many graphical programs that perform video encoding use ffmpeg behind the scenes, but usually fall flat in some way. The number of options is quite large, but the internet exists. Might as well go CLI.
Blender is actually an open-source 3D computer graphics program. It’s very professional, very reliable, and includes a surprisingly capable video editor.
So let’s get started.
Installing ffmpeg
You can try a simple
sudo apt-get install ffmpeg
Except the ffmpeg version that this installs on my Ubuntu 14.04 wasn’t compiled with x11grab support. You’ll find out whether that’s the case when you run the record command further down; if so, read on.
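A quick way to check up front, assuming the ffmpeg binary is on your PATH, is to grep the configuration string it prints:
ffmpeg -version | grep x11grab
If that prints a line containing --enable-x11grab you’re good; if it prints nothing, the build most likely lacks x11grab and you’ll need to compile your own.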
Compiling ffmpeg from source was painless, but it took some time and worked the CPU:
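# grab the source tarball first, e.g. from https://ffmpeg.org/releases/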
tar -xvf ffmpeg-3.0.1.tar.bz2
cd ffmpeg-3.0.1
./configure --enable-nonfree \
--enable-gpl \
--enable-libx264 \
--enable-libfreetype \
--enable-x11grab \
--enable-zlib
# you may want more options
make -j8
sudo make install
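A quick sanity check that the shell now picks up the freshly built binary (paths assume the default /usr/local prefix):
hash -r          # make the shell forget any cached ffmpeg location
which ffmpeg     # should now report /usr/local/bin/ffmpeg
ffmpeg -version  # the configuration line should include --enable-x11grab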
Recording with ffmpeg
Recording the desktop is very straightforward:
ffmpeg \
-f x11grab -video_size 960x540 -framerate 30 -i :0.0+0,81 \
-codec:v libx264 -qp 0 -preset ultrafast capture.mkv
The various parameters:
-f x11grab
sets the input format to x11grab
-video_size <w>x<h>
sets the size of the recorded region - in my case 1/4 of a full-HD screen. If you wanted to capture the whole screen but resize it, you would insert a scale output filter (see the example after this list)
-framerate <rate>
sets, you guessed it, the framerate of the capture
-i
is the “input filename”, in this case display 0, screen 0 of my X11 server (I only have one screen). The optional coordinates after the plus sign specify the x and y offset, in my case skipping the menu bar
-codec:v libx264
sets the output video codec to x264, which is excellent as far as quality vs file size goes, but also fast (a rare combination)
-qp 0
sets x264 to encode losslessly. We want this because we’ll be editing the footage, so we don’t want to compress it multiple times. Honestly though, when recording desktop video there is so little movement that the lossless output isn’t much bigger than the lossy one. The lossless option is also hassle-free, as you don’t have to dial in the encode quality and worry whether the text is legible. I recommend it. If you wanted lossy, you’d use the -crf option instead
-preset ultrafast
tells x264 to haul ass and use as few resources as possible. For lossless footage that we’ll re-encode anyway, this is fine and avoids stuttering during the recording
capture.mkv
is simply the output filename; I’m using a Matroska container here, because they just work
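For example, capturing a full 1920x1080 screen and scaling it down to half size might look something like this (just a sketch, assuming a 1080p display on :0.0):
ffmpeg \
-f x11grab -video_size 1920x1080 -framerate 30 -i :0.0 \
-filter:v scale=960:-2 \
-codec:v libx264 -qp 0 -preset ultrafast capture.mkv
The -2 in the scale filter keeps the aspect ratio while rounding the height to an even number, which keeps the encoder happy.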
If I were outputting a final video, I might use these options instead:
-preset slow
: I really don’t know if this makes a difference with lossless encoding, but possibly the encoder will try to find smaller deltas between frames
-tune animation
: Again, for lossless this might have no effect, but I’m hoping it gives x264 a better starting point for optimisation
-pix_fmt yuv420p
: Recorded from the desktop, the video will have equal sampling for luminance (Y) and chroma (UV), called 4:4:4. Some players don’t deal with higher chroma sampling, so this sets the sampling to 4:2:0. It’s highly technical, but required to just work(tm). Wikipedia on YUV has more info
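Put together, a final re-encode of the captured file with those options might look like this (filenames are just placeholders):
ffmpeg -i capture.mkv \
-codec:v libx264 -preset slow -tune animation -qp 0 \
-pix_fmt yuv420p final.mkv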
Editing with Blender
Warning: The interface has a steep learning curve. I’ve used Blender for 3D work before, so I’m happy with it. But definitely go check out a tutorial and be prepared to spend a while getting into the swing of things.
I recommend it because everything else I tried either crashed all the time or was too simple - you can even colour grade footage in Blender! It really will allow you to create a professional-looking video, for free.
To install Blender, visit the website, and then look at a tutorial, which will explain stuff much better than I could.
Encoding with Blender
Blender is pretty bad at encoding, so you’ll want to use ffmpeg again. Some Blender builds come with the ffmpeg output format; if not, this is something we can work around.
You can export every frame and then combine them afterwards if you want. If your files are called output0000.png and so forth, do this:
ffmpeg -framerate 30 -i output%04d.png \
-codec:v libx264 -preset slow -tune animation -qp 0 \
-pix_fmt yuv420p render.mkv
This takes up massive amounts of disk space, and writing so many images out is slow, especially PNG with compression (but without it the files would be even bigger). Instead, you can use Blender’s frame server to pipe the raw frames directly into ffmpeg (script slightly modified from 1 and 2):
#!/bin/sh
# Pull rendered frames from Blender's frame server and pipe them into ffmpeg.
FRAMESERVER=http://localhost:8080

# info.txt lists the render settings (start/end frame, width, height, rate);
# turn each "key value" line into an R_key shell variable.
eval `wget ${FRAMESERVER}/info.txt -O - 2>/dev/null |
    while read key val ; do
        echo R_$key=$val
    done`

# Request each frame as a PPM image and stream the lot into ffmpeg's stdin.
i=$R_start
{
    while [ $i -le $R_end ] ; do
        wget ${FRAMESERVER}/images/ppm/$i.ppm -O - 2>/dev/null
        i=$(($i+1))
    done
} | ffmpeg -f image2pipe -vcodec ppm \
    -video_size ${R_width}x${R_height} -framerate $R_rate -i pipe:0 \
    -codec:v libx264 -preset slow -tune animation -qp 0 \
    -pix_fmt yuv420p encode.mkv

# Tell Blender we're done so it can stop serving frames.
wget ${FRAMESERVER}/close.txt -O - 2>/dev/null >/dev/null
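To use it, start Blender’s frame server render so it’s serving frames over HTTP (the FRAMESERVER variable above assumes port 8080 on the local machine), then run the script from another terminal:
chmod +x frameserver-encode.sh   # or whatever you saved the script as
./frameserver-encode.sh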
As you can see, the ffmpeg options I’ve already used come in handy here. And that’s it - you should have a beautifully rendered and encoded video at the end (although this can take a while; video work is notoriously demanding, even for a beefy rig).