How do I ...?
This page collects examples of how to perform simple tasks in VapourSynth. It's mostly about how to shuffle clips around and how to convert between formats, not about actual filtering.
Many of the things here can also be found on their respective documentation pages (e.g. VapourSynth's Python Reference and Function Reference), so go there if you need more details on any function. The point of this page is to make the barrier to entry lower.
Some of the entries here list more than one way to achieve a certain goal. For example, they may show both a way using only standard VapourSynth functions, and a way using JET wrappers. Apart from just providing multiple options, this is also done to show that many of the wrappers around simple operations are not magic, and really just call standard functions under the hood. In the end, which method you use is up to you. Unless otherwise stated, there isn't any relevant difference between them, except for one option being easier to write than the other.
How do I cut off frames at the beginning/end of a clip?
Clips can be cut by simply slicing them like Python lists:
Just like everything else, cutting clips can also be done via a filter invocation. There's no real use for this (unless you're doing fancy things like passing around filter functions as objects, in which case you probably don't need to read this page), except for knowing that slicing is not magic.
Note that the Trim filter, unlike slicing, is inclusive.
How do I cut out a section of frames from a clip?
See above.
How do I join multiple clips together?
Clips can be joined by simply using the + operator in Python:
The same can also be done with core.std.Splice:
How do I stack two clips on top of one another?
This can be done with core.std.StackVertical.
But chances are that you are asking this because you want to compare the clips with one another.
Unless you want to check if the clips are synced, chances are you want to use multiple output nodes to compare them instead.
How do I interleave two clips?
This can be done with core.std.Interleave.
But chances are that you are asking this because you want to compare the clips with one another.
Unless you want to check if the clips are synced, chances are you want to use multiple output nodes to compare them instead.
How do I compare multiple clips?
Set the clips you want to compare as outputs. Then, open the script in vs-preview (see the Setup page) and use the number keys to switch between outputs.
How do I name my outputs?
Pass a name for each clip when setting it as an output (vs-preview's set_output wrapper accepts a name argument). Note that the names will only show up in vs-preview and not in other previewers.
How do I preview a VFR clip with the correct frame rate(s)?
Pass a timecodes file to set_output:
You can also pass a Timecodes object (which you could generate at runtime, or modify):
You can generate a Timecodes object from a clip's per-frame _DurationNum and _DurationDen properties, but note that this is very slow, since it needs to go through the entire clip. One useful approach is to generate the timecodes once and then save them to a file.
How do I get the luma/chroma of a clip?
This can be done with core.std.ShufflePlanes. There are also JET wrappers for this in vstools: get_y for the luma, get_u/get_v for the chroma planes, and for RGB clips the corresponding get_r, get_g, get_b functions.
If you only want to see the individual planes of a clip, and not process them, you may want to use vs-preview's "Split Planes" plugin instead (see the Setup page).
How do I combine luma and chroma into a clip, or replace planes of a clip?
How do I change a clip's bit depth?
Note that the vanilla VS version does not dither by default, while vstools.depth
does dither when necessary.
With both versions the dither type can be set in a parameter.
How do I retag a clip's color matrix/color range/etc?
Retagging only changes the metadata (here in the form of frame properties) without changing any of the pixel values. (But of course filters called on this clip may behave differently based on the metadata, which is the entire point. In particular your clip will display differently in vs-preview, even though the pixel values are the same.)
How do I convert a clip's color matrix/color range/etc?
Tag your clip as the source matrix/range/etc and use the core.resize
function to convert it to the target matrix/range/etc.
Converting color matrix/range/etc will change the pixel values as well as the metadata, so the resulting clip will look the same in a previewer (except for subtle differences due to dithering, etc) even though the pixel values are different.
Also note that converting color matrix, transfer, or primaries (but not range or chroma location)
requires upscaling chroma to the luma's size.
The above code assumes a YUV444 clip;
it will work with YUV420 clips, but the output will not be good since it uses Point to resize.
However, you shouldn't simply replace Point
with another scaler like Lanczos
,
since that scaler would be used for both upscaling and downscaling.
It's better to explicitly upscale to YUV444 first (using, say, Lanczos
), convert the color space,
and then eventually downscale back to YUV420 again (using, say, Hermite, i.e. Bicubic
with b=c=0).
Warning
Do not use the matrix_in
/range_in
/etc family of arguments to convert color spaces.
Frame properties, when present, take precedence over these arguments, which can lead to very unexpected behavior.
Hence you should instead be overwriting the frame properties, as done in the snippet above.
Warning
Color range needs special treatment here, as shown in the above snippet.
The meaning of the values 0
and 1
is flipped between the _ColorRange
frame property and
the core.resize
function.
In the frame property, 0
means full and 1
means limited (docs),
but in core.resize
it's the other way around (docs).
Alternatively, you can use fmtconv.
How do I apply a filter to only some frames in the clip?
Unless the filter is a temporal one and you specifically want it to only get your selected frames as an input, the simplest way is to apply the filter to the entire clip and then replace the desired frames afterwards:
For more convenience, you can use the replace_ranges
function to avoid having to manually slice all the clips.
This can also be more performant when you have many ranges to replace.
Don't worry: even though you pass the entire clip as an input to your filter, the filter will not actually run on the frames that don't make it into partially_filtered when you set partially_filtered (or another clip based on it) as an output. Filters only run on a frame when that frame is requested.
How do I decide at runtime whether to apply a filter or not?
Unless you want to write your own plugin, the way to do this is with FrameEval
:
For example, to blur all frames whose average luma is larger than 0.5 (assume the clip is a float clip):
Try not to instantiate filters inside of the per-frame function, if possible.
Note how the above snippet creates the blurred
and stats
clips outside of the function,
and only references them inside the function.
This way, the filter is only instantiated once, instead of once for every frame.
Of course, you cannot do this if the filter parameters need to vary per frame.
In that case, be especially careful if your filter's instantiation is resource-heavy.
How do I apply a filter to only a certain section of the picture?
In general this depends very strongly on what filter you're using and what you want to achieve. One common answer, however, is to apply the filter to the entire frame and do a masked merge with the original clip.
For example, to only blur a certain rectangle in the frame:
In practice you would create such a rectangle mask with the squaremask helper rather than building it from core.std.BlankClip and core.std.AddBorders by hand; there's no real reason to do the latter except to understand how squaremask might work internally.

Other ways to create masks include:
- Manually building a mask with core.akarin.Expr, using an expression that computes the mask value based on the position.
- Building a mask using certain filters (e.g. edge masks) or manual expressions based on the pixel values
- Manually drawing a mask in an image editor and importing it from a file
- Drawing a mask using subtitle drawings in Aegisub, and rendering the resulting subtitle line using the core.sub plugin
How do I access or modify a frame's content in Python?
Unless you know what you're doing, chances are that you shouldn't be doing this.
Modifying frame contents in Python is slow and not the way VapourSynth is intended to be used.
You should instead see if there is a plugin that applies the filter you want to apply,
or write such a plugin if there isn't.
If you want to apply some custom formula to a frame's pixels,
you can use the core.std.Expr
or the more powerful third-party core.akarin.Expr
functions.
That said, accessing frame data from Python can be useful when you're trying out some new filter idea and want to prototype using tools like numpy.
To read a frame's contents into a numpy array:
To modify a frame's contents:
How do I remove artifacts from a video without being too destructive?
Very carefully.