Introduction
There are a number of open-source video editors available. The most notable one is Kdenlive.
But suppose you don't need a large graphical editor for the task at hand. You may want to edit a video on a server, where GUI applications are not available. Or you may need to process a large number of videos in a similar way; doing so in a graphical editor will most likely be painful and repetitive. The aforementioned Kdenlive has no scripting support, so everything needs to be done by clicking buttons.
This is where FFmpeg comes in handy. It's available on all major platforms. It has a large community of users and good documentation with plenty of examples.
One major drawback of FFmpeg is its complexity and steep learning curve. It has a complex command-line interface, especially once filters are involved.
The good news is that simple video editing tasks, such as conversion, trimming, cropping, scaling, merging two videos together, and adding or removing video and audio tracks, are relatively easy to do in FFmpeg.
Prerequisites
- All provided ffmpeg commands should work without changes on Windows and Linux, with the exception of some Bash scripts. I don't have access to a macOS system and can't test there, but they should work too.
- Prepare a few video files to play with while following this tutorial. The video files should all have the same resolution.
- Install a good media player to watch your results. I recommend mpv, SMPlayer, or VLC (in alphabetical order).
- Read at least the description of FFmpeg in its manual page. FFmpeg has unusual conventions regarding command-line arguments, and you need to understand them.
Example terminology
I use input.mp4 as a placeholder for the file you want to process with FFmpeg, and output.mp4 for the file that FFmpeg generates as a result of your commands. Replace those filenames appropriately.
Step 1 - Installing
On Windows and Linux, you can download the latest build of FFmpeg from GitHub. You will most likely need the gpl variant. Extract it and add the bin directory to your PATH environment variable. Or follow the steps below for an automated installation.
FFmpeg is available on all major platforms. Use the instructions below to install it on your operating system.
Step 1.1 - Installing on Linux
Copy and paste the following Bash snippet into your shell to download FFmpeg and install it to /usr/local/ffmpeg:
(
    set -e
    # Map the machine architecture to the suffix used in the build names:
    # empty for x86_64 ("linux64") and "arm" for aarch64 ("linuxarm64").
    arch=
    case "$(uname -m)" in
        'x86_64')
            ;;
        'aarch64')
            arch="arm"
            ;;
        *)
            echo "Unknown architecture!" >&2
            exit 1
            ;;
    esac
    ffmpeg_archive=$(mktemp)
    dest="/usr/local/ffmpeg"
    variant="gpl"
    # Download the latest build and unpack it into $dest,
    # stripping the top-level directory from the archive.
    curl -fLo "$ffmpeg_archive" "https://github.com/BtbN/FFmpeg-Builds/releases/download/latest/ffmpeg-master-latest-linux${arch}64-${variant}.tar.xz"
    sudo mkdir -p "$dest"
    sudo tar --strip-components=1 -C "$dest" -xf "$ffmpeg_archive"
    rm "$ffmpeg_archive"
)
Add FFmpeg to the PATH environment variable:
export PATH=$PATH:/usr/local/ffmpeg/bin
To make it persistent, add this line to your ~/.profile configuration file.
FFmpeg's output to the terminal is noisy by default. It prints the build flags and library versions, which is not terribly useful information to see on each and every invocation of ffmpeg. You can add the following commands to your ~/.bashrc to silence that information:
alias ffmpeg='ffmpeg -hide_banner'
alias ffprobe='ffprobe -hide_banner'
alias ffplay='ffplay -hide_banner'
Now, apply your changes:
source ~/.bashrc
Step 1.2 - Installing on macOS
Use the instructions on the FFmpeg website. Alternatively, install FFmpeg using Homebrew.
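With Homebrew, the installation is a single command:
brew install ffmpeg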
Once it's installed, define the same shell aliases shown in the Linux section above to make FFmpeg's output quieter.
Step 1.3 - Installing on Windows
Open PowerShell and run the following commands to download FFmpeg. Add the path that is printed to your terminal to your system PATH
environment variable.
If your architecture is Arm64, change the $build variable to ffmpeg-master-latest-winarm64-gpl.
$build = "ffmpeg-master-latest-win64-gpl"
curl.exe -fLO --output-dir "$HOME" "https://github.com/BtbN/FFmpeg-Builds/releases/download/latest/$build.zip"
Expand-Archive "$HOME\$build.zip" "$HOME\ffmpeg"
rm "$HOME\$build.zip"
echo "$HOME\ffmpeg\$build\bin"
Open a new terminal window and run the following command to check that FFmpeg is installed properly:
ffmpeg
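If FFmpeg is found, it prints its banner with version and configuration information. You can also check the version explicitly:
ffmpeg -version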
Step 2 - Converting file format
File format conversions in FFmpeg are as easy as this:
ffmpeg -i input.mp4 output.webm
The command above will convert the file from MP4 to WebM file format, which is commonly used on the web.
You don't need to specify the file formats explicitly; FFmpeg detects them automatically. It's unlikely that you will need to work with a format that FFmpeg doesn't support. After all, FFmpeg and its libraries are the backbone of popular players like mpv and VLC.
You can list all supported file formats:
ffmpeg -formats
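Similarly, you can list all available encoders, which is useful when choosing the codec names used later in this step:
ffmpeg -encoders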
You can check which options are supported by a particular file format, and which audio and video codecs it uses by default, using the following command:
- Change webm to the name of the format you want to convert your file to.
ffmpeg -h muxer=webm
You can select a video codec by using the -c:v option. For example, use VP8 for video (-b:v 5M sets the video bitrate to 5 Mbit/s):
ffmpeg -i input.mp4 -c:v libvpx -b:v 5M output.webm
You can choose an audio codec instead of the default one by passing the -c:a option. For example, use Vorbis for audio:
ffmpeg -i input.mp4 -c:a libvorbis output.webm
Combine the commands above into a single one:
ffmpeg -i input.mp4 -c:v libvpx -b:v 5M -c:a libvorbis output.webm
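Because every conversion is a single command, batch conversion fits naturally into a shell loop. Here is a minimal Bash sketch, assuming all your inputs are MP4 files in the current directory:
# Convert every MP4 in the current directory to WebM.
for f in *.mp4; do
    ffmpeg -i "$f" -c:v libvpx -b:v 5M -c:a libvorbis "${f%.mp4}.webm"
done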
Step 3 - Working with streams
In this step you will learn how to work with streams in FFmpeg.
Step 3.1 - Inspecting streams in a file
Video files can contain multiple video and audio streams. Many ffmpeg options operate on streams, and you can choose between them using their ordinal number in the file.
Run the following command to print streams that are included in a file:
- Replace input.mp4 with your file.
ffmpeg -i input.mp4
Example output:
The video file contains two streams. The first one is a video stream using the H.264 codec, and the second is an audio stream using the AAC codec.
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp41isom
  Duration: 00:00:05.76, start: 0.000000, bitrate: 4134 kb/s
  Stream #0:0[0x1](und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 4051 kb/s, 30 fps, 30 tbr, 30k tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
      encoder         : AVC Coding
  Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 119 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]
Take a look at the output lines that start with "Stream". Input files and streams are numbered starting from zero: 0 is the first stream, 1 is the second, and so on.
A string like Stream #0:1 can be broken into components as follows:
Stream #0:1
        ▲ ▲
        │ │
        │ └── Stream number
        │
        └──── Input file number
The first number specifies the input file; it will always be zero if only one input file was passed to ffmpeg.
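For a more compact, script-friendly listing, ffprobe (which ships with FFmpeg) can print one line per stream. A sketch, assuming input.mp4 is your file:
ffprobe -v error -show_entries stream=index,codec_name,codec_type -output_format csv=p=0 input.mp4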
Step 3.2 - Removing audio streams
Let's start with a simple example. Suppose you have a video file that contains an audio track you want to remove. This can be done in FFmpeg very quickly and without any loss of video quality:
ffmpeg -i input.mp4 -c:v copy -an output.mp4
The command above uses the -i option to specify input.mp4 as the input file. The -c:v copy option (where v stands for video) tells FFmpeg to copy the video stream to the output without re-encoding it. Finally, the -an option tells FFmpeg to drop all audio streams from the result, which is written to output.mp4.
The great thing about the command above is that it's very fast even for large files, and it's lossless.
Now, you can open output.mp4 in your favorite media player and check that the audio tracks are indeed gone. Alternatively, run the command below and inspect the lines that start with "Stream":
ffmpeg -i output.mp4
Output for input.mp4 (audio stream is present):
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp41isom
  Duration: 00:00:05.76, start: 0.000000, bitrate: 4134 kb/s
  Stream #0:0[0x1](und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 4051 kb/s, 30 fps, 30 tbr, 30k tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
      encoder         : AVC Coding
  Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 119 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]
Output for output.mp4 (audio stream is absent):
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'output.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf62.0.102
  Duration: 00:00:05.70, start: 0.000000, bitrate: 4054 kb/s
  Stream #0:0[0x1](und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p(progressive), 1920x1080 [SAR 1:1 DAR 16:9], 4051 kb/s, 30 fps, 30 tbr, 30k tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
      encoder         : AVC Coding
Step 3.3 - Extracting streams
What if you want to extract the audio stream from a video file and save it into a separate file?
No problem, use the command below to extract all audio streams from input.mp4:
ffmpeg -i input.mp4 -map 0:a -c copy output.mp4
You can use the Matroska (.mkv) container instead:
ffmpeg -i input.mp4 -map 0:a -c copy output.mkv
If you want to extract only a specific stream, inspect your file as shown above and determine the stream number you want to extract.
For example, to extract the second stream from input.mp4 (streams are numbered from zero), use the following command:
ffmpeg -i input.mp4 -map 0:1 -c copy output.mp4
You can also encode an audio stream as an MP3 file. In contrast to the previous commands, this one involves transcoding: the audio stream is decoded and encoded again.
Note:
- -map 0:a:0 means the first audio stream in the file.
- In other words, the difference between -map 0:1 and -map 0:a:0 is that the first option uses an ordinal number among all streams in the file (those numbers are printed when you run ffmpeg -i input.mp4), while the second option considers only audio streams and selects the first one.
ffmpeg -i input.mp4 -map 0:a:0 output.mp3
Step 3.4 - Combining video and audio streams into a single file
You can take a video stream from one file and an audio stream from another, and combine them. The command below takes the video stream from video.mp4 and adds the audio from audio.mp3:
ffmpeg -i video.mp4 -i audio.mp3 -map 0:v:0 -map 1:a:0 -c:v copy output.mp4
If the audio is longer than the video, you may need to cut it. Use the -shortest flag to achieve this:
ffmpeg -i video.mp4 -i audio.mp3 -map 0:v:0 -map 1:a:0 -c:v copy -shortest output.mp4
Step 4 - Resizing video
You can change the dimensions of your video file.
Note: Make sure that the new width and height you specify are divisible by 2. Otherwise you will get an error:
width not divisible by 2 # Or height not divisible by 2
For example, change the width to 640px and adjust the height appropriately to maintain the original aspect ratio:
- If the original aspect ratio is 16:9, which is common, the height of output.mp4 will be 360px.
- The height is specified as -2, which makes FFmpeg pick a height that is divisible by 2 and eliminates the error mentioned above.
ffmpeg -i input.mp4 -vf "scale=640:-2" output.mp4
Or change the height to 400px and let FFmpeg adjust the width while maintaining the aspect ratio:
ffmpeg -i input.mp4 -vf "scale=-2:400" output.mp4
Step 5 - Cropping video
To crop a video, you need to find the rectangle you want to keep in the input video and define its position and dimensions. The position is specified using the coordinates of the rectangle's top left corner. The dimensions are the width and height of the video you want to get as the result.
The origin of coordinates (x=0, y=0) is in the top left corner of the input video.
For example, to crop the area with a size of 800x450 that is positioned 50px away from the left edge and 100px from the top edge:
ffmpeg -i input.mp4 -vf "crop=w=800:h=450:x=50:y=100" output.mp4
The question is how to find the best position and size for the cropped rectangle. There are two options.
The first one is to fill the areas near the edges with a solid color to preview the future crop.
For example, to draw a border with a size of 100px around the left and right edges, and 300px around the top and bottom edges, run:
ffmpeg -i input.mp4 -vf "fillborders=left=100:right=100:top=300:bottom=300:mode=fixed:color=blue" output.mp4
Now, inspect output.mp4. The area filled with blue is the area you want to delete. If you're not happy with the result, adjust the values above.
Once you have found the best values for the crop, you need to convert the numbers that you used in the fillborders command above to numbers that the crop filter can understand. For the fillborders command above, the result will be:
- in_w and in_h are the width and height of the original video respectively.
- 100 is the width to crop away at the left and right edges.
- 300 is the height to crop away at the top and bottom edges.
ffmpeg -i input.mp4 -vf "crop=w=in_w-2*100:h=in_h-300*2:x=100:y=300" output.mp4
The second method is to draw the outline of a rectangle which you want to crop, on top of the original video. Once you're satisfied with the size and position of the rectangle, you can replace it with an actual crop command.
Copy the crop command above and replace crop with drawbox. Add the color parameter. You will get:
- The parameters have the same meaning as in the crop command. Tweak them as many times as necessary.
ffmpeg -i input.mp4 -vf "drawbox=w=800:h=450:x=50:y=100:color=blue" output.mp4
After some back and forth, you have found the area you want to crop. Now, replace drawbox back with crop and remove the color parameter. The result will look like this:
ffmpeg -i input.mp4 -vf "crop=w=600:h=300:x=100:y=225" output.mp4
Step 6 - Trimming video
Step 6.1 - Trim video at the start:
The command below cuts away the first 10 seconds of input.mp4 and saves the rest to output.mp4.
- The -ss option is used for seeking. To seek accurately, you need to transcode, hence no -c copy option is used.
ffmpeg -ss 10s -i input.mp4 output.mp4
Step 6.2 - Trim video at the end:
To do this, you need to specify the new duration of your video. The command below keeps the first 5 minutes and 12 seconds of the video and cuts off the rest.
ffmpeg -t 05:12 -i input.mp4 output.mp4
If you need to trim a fixed amount from the end of the video, the command above is not very convenient, because you have to compute the new duration yourself, and the duration of the original video varies.
Instead of checking the duration of each video manually and subtracting the amount you want to trim, you can define a function in your Bash shell to do this automatically:
ffmpeg_rtrim() {
    if (( $# != 3 )); then
        echo "Usage: $FUNCNAME output input trimsec" >&2
        return 1
    fi
    local output=$1
    local input=$2
    local trimsec=${3%.*}
    local duration
    # Ask ffprobe for the duration of the input, in seconds.
    duration=$(ffprobe -v error -show_entries format=duration -output_format default=nw=1:nk=1 "$input")
    (( $? == 0 )) || return
    # Split the duration into whole seconds and the decimal part,
    # then subtract the requested amount from the whole seconds.
    local sec=${duration%.*}
    local decimal=${duration#*.}
    local newdur="$(( sec - trimsec )).${decimal}s"
    ffmpeg -t "$newdur" -i "$input" "$output"
}
For example, to trim the last 10 seconds of input.mp4 and save the result to output.mp4, run:
ffmpeg_rtrim output.mp4 input.mp4 10
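You can also combine -ss and -t to cut a fragment out of the middle of a video. For example, to keep 10 seconds of input.mp4 starting at the 30-second mark:
ffmpeg -ss 30s -t 10s -i input.mp4 output.mp4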
Step 7 - Working with video fragments
Step 7.1 - Join video fragments together
There are multiple ways to do it. I will show you two of them.
Note: Both methods described here use transcoding. For long video files, this process can take a while. If that's not an option for you, take a look at the concat demuxer. That method requires more boilerplate; you will need to create a text file in a special format and list all the files that you want to join there. A minimal sketch is shown below.
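Here is that sketch of the concat demuxer approach, assuming the parts share the same codecs and parameters so the streams can be copied without transcoding. First create a file (here named list.txt) with the following content:
file 'part1.mp4'
file 'part2.mp4'
Then run:
ffmpeg -f concat -i list.txt -c copy output.mp4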
For the first method to work, the files should have the same resolution. It uses the so-called concat protocol. When you use one of the protocols provided by FFmpeg, it means that the input files will be handled in a special way. For example, FFmpeg supports the HTTP protocol, which can be used to read an input file from the network.
To join a two-part video, run:
- Replace part1.mp4, part2.mp4 and output.mp4 with your filenames.
ffmpeg -y -i part1.mp4 -qscale:v 1 temp1.mpg
ffmpeg -y -i part2.mp4 -qscale:v 1 temp2.mpg
ffmpeg -i concat:"temp1.mpg|temp2.mpg" output.mp4
The result will be saved to output.mp4.
Clean up the temp files:
rm temp1.mpg temp2.mpg
To join a three-part video, run:
ffmpeg -y -i part1.mp4 -qscale:v 1 temp1.mpg
ffmpeg -y -i part2.mp4 -qscale:v 1 temp2.mpg
ffmpeg -y -i part3.mp4 -qscale:v 1 temp3.mpg
ffmpeg -i concat:"temp1.mpg|temp2.mpg|temp3.mpg" output.mp4
And so on.
If you're using Bash as your shell, you can define a shell function and use it to join an arbitrary number of video fragments together:
ffmpeg_join() {
    if (( $# < 3 )); then
        echo "Usage: $FUNCNAME output part1 part2 [part3 ...]" >&2
        return 1
    fi
    local output=$1
    shift
    local part concat temp
    local -a temps
    # Transcode each part to an intermediate MPEG file.
    for part in "$@"; do
        temp=$(mktemp)
        ffmpeg -y -i "$part" -qscale:v 1 -f mpeg "$temp"
        concat+="${temp}|"
        temps+=("$temp")
    done
    # Join the intermediate files, stripping the trailing "|".
    ffmpeg -i "concat:${concat%|}" "$output"
    for temp in "${temps[@]}"; do
        rm "$temp"
    done
}
Here is the equivalent of the previous command, joining a three-part video and saving it to output.mp4:
ffmpeg_join output.mp4 part1.mp4 part2.mp4 part3.mp4
Let's look at the second method, which uses the concat filter. Filters are a powerful mechanism built into FFmpeg that allows you to alter audio and video streams in various ways.
Use the following command to join a two-part video. The parts should have the same resolution and contain exactly one video and one audio stream:
- a=1 specifies the number of audio streams. The number of video streams is not specified because it's one by default.
ffmpeg -i part1.mp4 -i part2.mp4 -filter_complex "concat=a=1" output.mp4
If you want to join a three-part video, you need to specify the number of fragments using the n option:
ffmpeg -i part1.mp4 -i part2.mp4 -i part3.mp4 -filter_complex "concat=a=1:n=3" output.mp4
Again, you can wrap it into a shell function if you want:
ffmpeg_join() {
    if (( $# < 3 )); then
        echo "Usage: $FUNCNAME output part1 part2 [part3 ...]" >&2
        return 1
    fi
    local output=$1
    shift
    local -a args
    local part
    # Build the list of -i options, one per fragment.
    for part in "$@"; do
        args+=("-i" "$part")
    done
    ffmpeg "${args[@]}" -filter_complex "concat=a=1:n=$#" "$output"
}
Then, the equivalent of the previous command:
ffmpeg_join output.mp4 part1.mp4 part2.mp4 part3.mp4
Or use pathname expansion:
ffmpeg_join output.mp4 part[1-9].mp4
FFmpeg's FAQ and wiki contain additional information regarding file concatenation, which you may find useful.
Step 7.2 - Adding an intro image to a video
You can show an image at the beginning of your video, before the actual video starts. This image can contain a video title, for example.
Prepare an image in your favorite graphics editor. The resolution of the image should be equal to the resolution of your video.
The command below will create an intro video from the image:
- Replace intro.png with the path to your image. The resulting intro will be saved as intro.mp4.
- The duration of the intro will be 3 seconds. You can specify your own duration using the -t option.
- anullsrc is used to create silent audio, which makes it easier to concat the intro and the main video.
ffmpeg -f lavfi -i anullsrc=r=48000 -loop 1 -t 3 -i intro.png -pix_fmt yuv420p -shortest intro.mp4
Now, you need to combine the intro video created previously and your main video. To do this, execute:
ffmpeg -i intro.mp4 -i main.mp4 -filter_complex concat=a=1 output.mp4
The two commands above can be combined into a single one. The resulting command will create an intro from the image with a duration of 3 seconds and combine it with the main video file. It will work properly even if main.mp4 contains multiple audio streams.
- Replace intro.png and main.mp4 with your intro image and main video respectively.
- The intro duration needs to be specified twice, in -t 3 and delays=3s. The first option is used to produce a video stream from the image with a duration of 3 seconds. The second one is needed to offset the audio streams in main.mp4 by the duration of the intro.
ffmpeg -loop 1 -t 3 -i intro.png -i main.mp4 -af "adelay=delays=3s:all=1" -filter_complex concat -map 1:a output.mp4
Step 7.3 - Overlay video on top of another
This effect is often used by streamers. A small overlay window shows the streamer's webcam, while the main window is used for a screen capture that shows gameplay. You can achieve the same using FFmpeg's overlay filter. It accepts two videos: the first is the main one, and the second is the overlay. The resolution of the overlay should be smaller; you can scale it down using FFmpeg. The duration of both files should generally be the same.
Once you have the two videos, you can execute the following command to overlay one on top of another:
- The overlay is placed in the top right corner of the main video.
- -map 0:a -map 1:a means that the audio streams from both files are included in the result. You can switch between them in your media player.
- Replace main.mp4 and overlay.mp4 with your main and overlay videos respectively.
ffmpeg -i main.mp4 -i overlay.mp4 -filter_complex overlay=x=main_w-overlay_w-10:y=10 -map 0:a -map 1:a output.mp4
If you want to include the audio from main.mp4 only, execute the following command:
ffmpeg -i main.mp4 -i overlay.mp4 -filter_complex overlay=x=main_w-overlay_w-10:y=10 -map 0:a output.mp4
If you want to include the audio from overlay.mp4 only, execute:
ffmpeg -i main.mp4 -i overlay.mp4 -filter_complex overlay=x=main_w-overlay_w-10:y=10 -map 1:a output.mp4
As I said before, the resolution of the overlay should be smaller than the main video's. You can scale the overlay and place it on top of the main video using a single FFmpeg command:
- It's similar to the first command in this step, except that the overlay is scaled to a width of 400px automatically.
ffmpeg -i main.mp4 -i overlay.mp4 -filter_complex "[1:v]scale=400:-2,[0:v]overlay=x=main_w-overlay_w-10:y=10" -map 0:a -map 1:a output.mp4
Step 8 - Adding text to a video
There are two methods to add text to your video using FFmpeg.
The first one is to create a file with your text in the special .srt format, which is used for subtitles. Then FFmpeg's subtitles filter can be used to draw the text on top of your video. In contrast to regular subtitles, text drawn by the subtitles filter can't be disabled in a media player.
An .srt file looks like this:
- It consists of the start and end times, and the text itself. The file below will show "Hello World!" for five seconds, starting at the beginning of the video. After 7 seconds of the video have elapsed, two lines of text will be shown for 3 seconds.
- The Wikipedia article on the .srt file format provides additional information.
1
00:00:00,000 --> 00:00:05,000
Hello World!
2
00:00:07,000 --> 00:00:10,000
This is just an example.
Multiline text can be shown.
To add text in the .srt file format to your video, run the following command:
- Replace text.srt with the filename that contains your text.
- Replace input.mp4 with the filename of the original video.
- Replace output.mp4 with the desired filename for the resulting video with text.
ffmpeg -i input.mp4 -vf subtitles=text.srt output.mp4
- You can specify a font and font color. Replace Comic Sans MS with your font of choice.
- Warning: The font color is specified in reverse order! Instead of the usual RGB, it's specified as BGR. The font color starts after &H. For example, an RGB color like #32a852 will be written as &H52a832.
ffmpeg -i input.mp4 -vf "subtitles=text.srt:force_style='Fontname=Comic Sans MS,PrimaryColour=&H52a832'" output.mp4
You can change the font size too, by specifying the Fontsize property:
ffmpeg -i input.mp4 -vf "subtitles=text.srt:force_style='Fontsize=15,Fontname=Comic Sans MS,PrimaryColour=&H52a832'" output.mp4
Let's see how to add text to your video using the second method. This time we will use FFmpeg's drawtext filter.
To place a text in the center of your video, run the following command:
- Replace Hello World with the desired text and Comic Sans MS with a font of your choice.
- You need to tweak the font size depending on the amount of text and the video resolution. The default font size used by FFmpeg is really tiny; it's very hard to notice that the text is present at all.
- The FFmpeg documentation describes how you can specify a font color.
- x=(w-tw)/2 and y=(h-th)/2 are used to center the text horizontally and vertically respectively.
ffmpeg -i input.mp4 -vf "drawtext=text=Hello World:x=(w-tw)/2:y=(h-th)/2:fontsize=150:fontcolor=white:font=Comic Sans MS" output.mp4
When we used the subtitles filter, it was possible to specify when the text is shown and when it's hidden. The same effect can be achieved using the drawtext filter and its enable option. The text is shown only while the enable condition is true.
For example, to show the text only for the first 3 seconds:
- lt(t,3) is the equivalent of t < 3, where t is the current time in seconds.
ffmpeg -i input.mp4 -vf "drawtext=enable='lt(t,3)':text=Hello World:x=(w-tw)/2:y=(h-th)/2:fontsize=150:fontcolor=white:font=Comic Sans MS" output.mp4
You can combine operators to create more complex conditions for when to show the text. For example, the enable option below says: show the text for the first 3 seconds, then hide it, and show it again between seconds 6 and 10 of the video.
- + is used as a logical OR operator, which is a rather obscure feature of FFmpeg's expression evaluation. Read the manual page about the features that are supported in expressions.
ffmpeg -i input.mp4 -vf "drawtext=enable='lt(t,3)+between(t,6,10)':text=Hello World:x=(w-tw)/2:y=(h-th)/2:fontsize=150:fontcolor=white:font=Comic Sans MS" output.mp4
Instead of specifying your text in the command itself, which is not convenient and may require complex escaping of special characters, you can put it in a file. For example, create a file named text.txt and place the following text there:
- Warning: Use Unix-style line endings for this file, that is, line feed (LF) characters only. If you try to use Windows-style line endings with the drawtext filter, the text will not be shown. It looks like a bug in FFmpeg.
This is just an example.
Multiline text can be shown.
Now, you can use the following command to add the text from text.txt to your video:
- textfile=text.txt specifies the file with the text to be shown.
- text_align=C centers all lines of the text.
- Otherwise the meaning of the command is the same as in the previous example.
ffmpeg -i input.mp4 -vf "drawtext=textfile=text.txt:x=(w-tw)/2:y=(h-th)/2:fontsize=80:fontcolor=white:font=Comic Sans MS:text_align=C" output.mp4
As you can see, FFmpeg's command-line arguments can quickly become unwieldy. They can be very long and require complex escaping of special characters. You can ask FFmpeg to read an option value from a file by preceding the option name with a forward slash, for example: -/vf.
Create a file named drawtext.filter and put the following content there:
drawtext =
    textfile = text.txt:
    x = (w-tw)/2:
    y = (h-th)/2:
    fontsize = 80:
    fontcolor = white:
    font = Comic Sans MS:
    text_align = C
It's the equivalent of the previous long value of the -vf option. Now, you can apply the drawtext filter as follows:
ffmpeg -i input.mp4 -/vf drawtext.filter output.mp4
Much simpler.
Conclusion
FFmpeg is a good tool to have in your toolbox. Many video editing tasks can be done without leaving your terminal, and FFmpeg especially shines for batch video editing. To learn more, refer to the official documentation.