ivtc

Functions:

  • jivtc

    This function should only be used when a normal ivtc or ivtc + bobber leaves a chroma blend on every fourth frame.

  • sivtc

    Simplest form of a fieldmatching function.

  • vdecimate

    VDecimate is a decimation filter. It drops one in every cycle frames - the one that is most likely to be a duplicate.

  • vfm

    VFM is a field matching filter that recovers the original progressive frames from a telecined stream.

jivtc

jivtc(
    clip: VideoNode,
    pattern: int,
    tff: FieldBasedLike | bool | None = None,
    chroma_only: bool = True,
    postprocess: VSFunctionKwArgs = deblend,
    postdecimate: IVTCycles | None = CYCLE_05,
    ivtc_cycle: IVTCycles = CYCLE_10,
    final_ivtc_cycle: IVTCycles = CYCLE_08,
    **kwargs: Any
) -> VideoNode

This function should only be used when a normal ivtc or ivtc + bobber leaves a chroma blend on every fourth frame. You can disable chroma_only to use it for luma as well, but it is not recommended.
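
Usage Example

A minimal sketch, assuming clip is your 60i source; the pattern value and field order are placeholders you would determine from the source itself.

from vsdeinterlace.ivtc import jivtc

# Fix a chroma blend that a regular IVTC leaves on every fourth frame;
# `pattern` is the first frame of a clean-combed-combed-clean-clean sequence.
fixed = jivtc(clip, pattern=2, tff=True)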

Parameters:

  • clip

    (VideoNode) –

    Clip to process. Has to be 60i.

  • pattern

    (int) –

    First frame of any clean-combed-combed-clean-clean sequence.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Set top field first (True) or bottom field first (False).

  • chroma_only

    (bool, default: True ) –

    Whether to process only the chroma. Set to False to also process the luma, which is not recommended.

  • postprocess

    (VSFunctionKwArgs, default: deblend ) –

    Function to run after second decimation. Should be either a bobber or a deblender.

  • postdecimate

    (IVTCycles | None, default: CYCLE_05 ) –

    Decimation cycle applied to the postprocessed clip when the postprocess function does not decimate itself. Set to None if it does.

Returns:

  • VideoNode

    Inverse Telecined clip.

Source code in vsdeinterlace/ivtc.py
def jivtc(
    clip: vs.VideoNode,
    pattern: int,
    tff: FieldBasedLike | bool | None = None,
    chroma_only: bool = True,
    postprocess: VSFunctionKwArgs = deblend,
    postdecimate: IVTCycles | None = IVTCycles.CYCLE_05,
    ivtc_cycle: IVTCycles = IVTCycles.CYCLE_10,
    final_ivtc_cycle: IVTCycles = IVTCycles.CYCLE_08,
    **kwargs: Any,
) -> vs.VideoNode:
    """
    This function should only be used when a normal ivtc or ivtc + bobber leaves a chroma blend on every fourth frame.
    You can disable chroma_only to use it for luma as well, but it is not recommended.

    Args:
        clip: Clip to process. Has to be 60i.
        pattern: First frame of any clean-combed-combed-clean-clean sequence.
        tff: Set top field first (True) or bottom field first (False).
        chroma_only: Whether to process only the chroma. Set to False to also process the luma, which is not recommended.
        postprocess: Function to run after second decimation. Should be either a bobber or a deblender.
        postdecimate: Decimation cycle applied to the postprocessed clip when the postprocess function does not
            decimate itself. Set to None if it does.

    Returns:
        Inverse Telecined clip.
    """

    tff = FieldBased.from_param_or_video(tff, clip, True, jivtc).is_tff

    UnsupportedFramerateError.check(clip, (30000, 1001), jivtc)

    ivtced = clip.std.SeparateFields(tff).std.DoubleWeave(tff)
    ivtced = ivtc_cycle.decimate(ivtced, pattern)

    pprocess = postprocess(clip if postdecimate else ivtced, **kwargs)

    if postdecimate:
        pprocess = postdecimate.decimate(pprocess, pattern)

    inter = core.std.Interleave([ivtced, pprocess])
    final = final_ivtc_cycle.decimate(inter, pattern)

    final = join(ivtced, final) if chroma_only else final

    return FieldBased.ensure_presence(final, FieldBased.PROGRESSIVE)

sivtc

sivtc(
    clip: VideoNode,
    pattern: int = 0,
    tff: FieldBasedLike | bool | None = None,
    ivtc_cycle: IVTCycles = CYCLE_10,
) -> VideoNode

Simplest form of a fieldmatching function.

This is essentially a stripped-down JIVTC offering JUST the basic fieldmatching and decimation part. As such, you may need to combine multiple instances if patterns change throughout the clip.
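
Usage Example

A minimal sketch, assuming clip is the telecined source; the splice point and pattern values are placeholders for wherever the telecine pattern actually shifts in your clip.

from vsdeinterlace.ivtc import sivtc

# Combine two instances when the pattern changes partway through the clip.
part_a = sivtc(clip, pattern=0)[:1000]
part_b = sivtc(clip, pattern=3)[1000:]
ivtced = part_a + part_b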

Parameters:

  • clip

    (VideoNode) –

    Clip to process.

  • pattern

    (int, default: 0 ) –

    First frame of any clean-combed-combed-clean-clean sequence.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Top-Field-First.

Returns:

  • VideoNode

    IVTC'd clip.

Source code in vsdeinterlace/ivtc.py
def sivtc(
    clip: vs.VideoNode,
    pattern: int = 0,
    tff: FieldBasedLike | bool | None = None,
    ivtc_cycle: IVTCycles = IVTCycles.CYCLE_10,
) -> vs.VideoNode:
    """
    Simplest form of a fieldmatching function.

    This is essentially a stripped-down JIVTC offering JUST the basic fieldmatching and decimation part.
    As such, you may need to combine multiple instances if patterns change throughout the clip.

    Args:
        clip: Clip to process.
        pattern: First frame of any clean-combed-combed-clean-clean sequence.
        tff: Top-Field-First.

    Returns:
        IVTC'd clip.
    """

    tff = FieldBased.from_param_or_video(tff, clip, True, sivtc).is_tff

    ivtc = clip.std.SeparateFields(tff).std.DoubleWeave(tff)
    ivtc = ivtc_cycle.decimate(ivtc, pattern)

    return FieldBased.PROGRESSIVE.apply(ivtc)

vdecimate

vdecimate(
    clip: VideoNode,
    cycle: int = 5,
    chroma: bool = True,
    dupthresh: float = 1.1,
    scthresh: float = 15,
    block: int | tuple[int, int] = 16,
    clip2: VideoNode | None = None,
    ovr: str | bytes | bytearray | None = None,
    dryrun: bool = False,
) -> VideoNode

VDecimate is a decimation filter. It drops one in every cycle frames - the one that is most likely to be a duplicate.
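
Usage Example

A minimal sketch of the usual IVTC chain, assuming clip is a telecined 30000/1001 fps source; vfm is documented further down this page.

from vsdeinterlace.ivtc import vdecimate, vfm

matched = vfm(clip)                 # field matching leaves duplicate frames
progressive = vdecimate(matched)    # drop one duplicate per cycle of 5 frames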

Parameters:

  • clip

    (VideoNode) –

    Input clip.

  • cycle

    (int, default: 5 ) –

    Size of a cycle, in frames. One in every cycle frames will be decimated. Defaults to 5.

  • chroma

    (bool, default: True ) –

    Controls whether the chroma is considered when calculating frame difference metrics. Defaults to True.

  • dupthresh

    (float, default: 1.1 ) –

    This sets the threshold for duplicate detection. If the difference metric for a frame is less than or equal to this value then it is declared a duplicate. This value is a percentage of maximum change for a block defined by the blockx and blocky values, so 1.1 means 1.1% of maximum possible change. Defaults to 1.1.

  • scthresh

    (float, default: 15 ) –

    Sets the threshold for detecting scene changes. This value is a percentage of maximum change for the luma plane. Good values are between 10 and 15. Defaults to 15.

  • block

    (int | tuple[int, int], default: 16 ) –

    Sets the size of the blocks used for metric calculations. Larger blocks give better noise suppression, but also give worse detection of small movements. Possible values are any power of 2 between 4 and 512. Defaults to 16.

  • clip2

    (VideoNode | None, default: None ) –

    Clip that VDecimate will use to create the output frames. If clip2 is used, VDecimate will perform all calculations based on clip, but will decimate frames from clip2. This can be used to work around VDecimate's video format limitations. Defaults to None.

  • ovr

    (str | bytes | bytearray | None, default: None ) –

    Text file containing overrides. This can be used to manually choose which frames get dropped (a short usage sketch follows after this parameter list). The frame numbers apply to the undecimated input clip. The decimation pattern must contain cycle characters. If the overrides mark more than one frame per cycle, the first frame marked for decimation in the cycle will be dropped. Lines starting with # are ignored.

    • Drop a specific frame: 314 -
    • Drop every fourth frame, starting at frame 1001, up to frame 5403: 1001,5403 +++-

    Defaults to None.

  • dryrun

    (bool, default: False ) –

    If True, VDecimate will not drop any frames. Instead, it will attach the following properties to every frame:

    • VDecimateDrop: 1 if VDecimate would normally drop the frame, 0 otherwise.
    • VDecimateMaxBlockDiff: This is the highest absolute difference between the current frame and the previous frame found in any blockx by blocky block.
    • VDecimateTotalDiff: This is the absolute difference between the current frame and the previous frame.

    Defaults to False.
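
A minimal sketch of the override and dry-run workflows described above; the file name, frame range and inspection loop are illustrative, while the property names are the ones listed under dryrun.

from vsdeinterlace.ivtc import vdecimate

# Dry run: keep every frame but attach the decimation metrics for inspection.
marked = vdecimate(clip, dryrun=True)
for n in range(20):
    props = marked.get_frame(n).props
    print(n, props["VDecimateDrop"], props["VDecimateTotalDiff"])

# Manual override: force frame 314 to be dropped (file syntax as shown above).
with open("overrides.txt", "w") as f:
    f.write("314 -\n")

decimated = vdecimate(clip, ovr="overrides.txt")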

Returns:

  • VideoNode

    Decimated clip.

Source code in vsdeinterlace/ivtc.py
def vdecimate(
    clip: vs.VideoNode,
    cycle: int = 5,
    chroma: bool = True,
    dupthresh: float = 1.1,
    scthresh: float = 15,
    block: int | tuple[int, int] = 16,
    clip2: vs.VideoNode | None = None,
    ovr: str | bytes | bytearray | None = None,
    dryrun: bool = False,
) -> vs.VideoNode:
    """
    VDecimate is a decimation filter. It drops one in every `cycle` frames - the one that is most likely to be a
    duplicate.

    Args:
        clip: Input clip.
        cycle: Size of a cycle, in frames. One in every `cycle` frames will be decimated. Defaults to 5.
        chroma: Controls whether the chroma is considered when calculating frame difference metrics. Defaults to True.
        dupthresh: This sets the threshold for duplicate detection. If the difference metric for a frame is less than
            or equal to this value then it is declared a duplicate. This value is a percentage of maximum change for a
            block defined by the `blockx` and `blocky` values, so 1.1 means 1.1% of maximum possible change. Defaults to
            1.1.
        scthresh: Sets the threshold for detecting scene changes. This value is a percentage of maximum change for the
            luma plane. Good values are between 10 and 15. Defaults to 15.
        block: Sets the size of the blocks used for metric calculations. Larger blocks give better noise suppression,
            but also give worse detection of small movements. Possible values are any power of 2 between 4 and 512.
            Defaults to 16.
        clip2: Clip that VDecimate will use to create the output frames. If `clip2` is used, VDecimate will perform all
            calculations based on `clip`, but will decimate frames from `clip2`. This can be used to work around
            VDecimate's video format limitations. Defaults to None.
        ovr: Text file containing overrides. This can be used to manually choose which frames get dropped.
            The frame numbers apply to the undecimated input clip. The decimation pattern must contain `cycle`
            characters. If the overrides mark more than one frame per cycle, the first frame marked for decimation in
            the cycle will be dropped. Lines starting with # are ignored.

               - Drop a specific frame: 314 -
               - Drop every fourth frame, starting at frame 1001, up to frame 5403: 1001,5403 +++-

            Defaults to None.
        dryrun: If True, VDecimate will not drop any frames.
            Instead, it will attach the following properties to every frame:

               - VDecimateDrop: 1 if VDecimate would normally drop the frame, 0 otherwise.
               - VDecimateMaxBlockDiff: This is the highest absolute difference between the current frame and the
                   previous frame found in any `blockx` by `blocky` block.
               - VDecimateTotalDiff: This is the absolute difference between the current frame and the previous frame.

            Defaults to False.

    Returns:
         Decimated clip.
    """

    nblock = normalize_seq(block, 2)

    if clip2 is None and (clip.format.sample_type is not vs.SampleType.INTEGER or clip.format.bits_per_sample > 16):
        new_bits = min(clip.format.bits_per_sample, 16)

        clip2 = clip
        clip = clip.resize.Bilinear(
            format=clip.format.replace(sample_type=vs.SampleType.INTEGER, bits_per_sample=new_bits)
        )

    return core.vivtc.VDecimate(clip, cycle, chroma, dupthresh, scthresh, nblock[0], nblock[1], clip2, ovr, dryrun)

vfm

vfm(
    clip: VideoNode,
    tff: FieldBasedLike | bool | None = None,
    field: int = 2,
    mode: VFMMode = TWO_WAY_MATCH_THIRD_COMBED,
    mchroma: bool = True,
    cthresh: int = 9,
    mi: int = 80,
    chroma: bool = True,
    block: int | tuple[int, int] = 16,
    y: tuple[int, int] = (16, 16),
    scthresh: float = 12,
    micmatch: int = 1,
    micout: bool = False,
    clip2: VideoNode | None = None,
    postprocess: VideoNode | VSFunctionNoArgs | None = None,
) -> VideoNode

VFM is a field matching filter that recovers the original progressive frames from a telecined stream. VFM's output will contain duplicated frames, which is why it must be further processed by a decimation filter, like VDecimate.

Usage Example
# Run vsaa.NNEDI3 on leftover combed frames
vfm(clip, postprocess=NNEDI3(double_rate=False).deinterlace)

Parameters:

  • clip

    (VideoNode) –

    Input clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Sets the field order of the clip. Normally the field order is obtained from the _FieldBased frame property. This parameter is only used for those frames where the _FieldBased property has an invalid value or doesn't exist. If the field order is wrong, VFM's output will be visibly wrong in mode 0.

  • field

    (int, default: 2 ) –

    Sets the field to match from. This is the field that VFM will take from the current frame in case of p or n matches. It is recommended to make this the same as the field order, unless you experience matching failures with that setting. In certain circumstances changing the field that is used to match from can have a large impact on matching performance. 0 and 1 will disregard the _FieldBased frame property. 2 and 3 will adapt to the field order obtained from the _FieldBased property. Defaults to 2.

  • mode

    (VFMMode, default: TWO_WAY_MATCH_THIRD_COMBED ) –

    Sets the matching mode or strategy to use. Plain 2-way matching (option 0) is the safest of all the options in the sense that it won't risk creating jerkiness due to duplicate frames when possible, but if there are bad edits or blended fields it will end up outputting combed frames when a good match might actually exist. 3-way matching + trying the 4th/5th matches if all 3 of the original matches are detected as combed (option 5) is the most risky in terms of creating jerkiness, but will almost always find a good frame if there is one. The other settings (options 1, 2, 3, and 4) are all somewhere in between options 0 and 5 in terms of risking jerkiness and creating duplicate frames vs. finding good matches in sections with bad edits, orphaned fields, blended fields, etc. Note that the combed condition here is not the same as the _Combed frame property. Instead it's a combination of relative and absolute threshold comparisons and can still lead to the match being changed even when the _Combed flag is not set on the original frame. Defaults to VFMMode.TWO_WAY_MATCH_THIRD_COMBED.

  • mchroma

    (bool, default: True ) –

    Sets whether or not chroma is included during the match comparisons. In most cases it is recommended to leave this enabled. Only if your clip has bad chroma problems such as heavy rainbowing or other artifacts should you set this to false. Setting this to false could also be used to speed things up at the cost of some accuracy. Defaults to True.

  • cthresh

    (int, default: 9 ) –

    This is the area combing threshold used for combed frame detection. This essentially controls how "strong" or "visible" combing must be to be detected. Larger values mean combing must be more visible and smaller values mean combing can be less visible or strong and still be detected. Valid settings are from -1 (every pixel will be detected as combed) to 255 (no pixel will be detected as combed). This is basically a pixel difference value. A good range is between 8 and 12. Defaults to 9.

  • mi

    (int, default: 80 ) –

    The number of combed pixels inside any of the blockx by blocky size blocks on the frame for the frame to be detected as combed. While cthresh controls how "visible" the combing must be, this setting controls "how much" combing there must be in any localized area (a window defined by the blockx and blocky settings) on the frame. The minimum is 0, the maximum is blocky * blockx (at which point no frames will ever be detected as combed). Defaults to 80.

  • chroma

    (bool, default: True ) –

    Sets whether or not chroma is considered in the combed frame decision. Only disable this if your source has chroma problems (rainbowing, etc) that are causing problems for the combed frame detection with chroma enabled. In practice, using chroma=false is usually more reliable, except when there is chroma-only combing in the source. Defaults to True.

  • block

    (int | tuple[int, int], default: 16 ) –

    Sets the size of the window used during combed frame detection. This has to do with the size of the area in which mi number of pixels are required to be detected as combed for a frame to be declared combed. See the mi parameter description for more info. Possible values are any power of 2 between 4 and 512. Defaults to 16.

  • y

    (tuple[int, int], default: (16, 16) ) –

    The rows from y0 to y1 will be excluded from the field matching decision. This can be used to ignore subtitles, a logo, or other things that may interfere with the matching. Set y0 equal to y1 to disable. Defaults to (16, 16).

  • scthresh

    (float, default: 12 ) –

    Sets the scenechange threshold as a percentage of maximum change on the luma plane. Good values are in the 8 to 14 range. Defaults to 12.

  • micmatch

    (int, default: 1 ) –

    When micmatch is greater than 0, VFM will take into account the mic values of matches when deciding what match to use as the final match. Only matches that could be used within the current matching mode are considered. micmatch has 3 possible settings:

    • 0: disabled. Modes 1, 2 and 3 effectively become identical to mode 0. Mode 5 becomes identical to mode 4.
    • 1: micmatching will be used only around scene changes. See the scthresh parameter.
    • 2: micmatching will be used everywhere.

    Defaults to 1.

  • micout

    (bool, default: False ) –

    If true, VFM will calculate the mic values for all possible matches (p/c/n/b/u). Otherwise, only the mic values for the matches allowed by mode will be calculated. Defaults to False.

  • clip2

    (VideoNode | None, default: None ) –

    Clip that VFM will use to create the output frames. If clip2 is used, VFM will perform all calculations based on clip, but will copy the chosen fields from clip2. This can be used to work around VFM's video format limitations. Defaults to None.

  • postprocess

    (VideoNode | VSFunctionNoArgs | None, default: None ) –

    Optional function or clip to process combed frames. If a function is passed, it should take a clip as input and return a clip as output. If a clip is passed, it will be used as the postprocessed clip. The output of the clip or function must have the same framerate as the input clip. Defaults to None.
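
A minimal sketch of the clip form of postprocess; my_deinterlacer is a hypothetical placeholder for whichever deinterlacer you prefer, and its output must keep the input framerate.

from vsdeinterlace.ivtc import vfm

# Pre-compute a deinterlaced clip, then let VFM substitute it wherever the
# field-matched frame is still marked as combed.
deinterlaced = my_deinterlacer(clip)  # hypothetical helper, not part of vsdeinterlace
matched = vfm(clip, postprocess=deinterlaced)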

Returns:

  • VideoNode

    Field matched clip with progressive frames.

Source code in vsdeinterlace/ivtc.py
def vfm(
    clip: vs.VideoNode,
    tff: FieldBasedLike | bool | None = None,
    field: int = 2,
    mode: VFMMode = VFMMode.TWO_WAY_MATCH_THIRD_COMBED,
    mchroma: bool = True,
    cthresh: int = 9,
    mi: int = 80,
    chroma: bool = True,
    block: int | tuple[int, int] = 16,
    y: tuple[int, int] = (16, 16),
    scthresh: float = 12,
    micmatch: int = 1,
    micout: bool = False,
    clip2: vs.VideoNode | None = None,
    postprocess: vs.VideoNode | VSFunctionNoArgs | None = None,
) -> vs.VideoNode:
    """
    VFM is a field matching filter that recovers the original progressive frames
    from a telecined stream. VFM's output will contain duplicated frames, which
    is why it must be further processed by a decimation filter, like VDecimate.

    Usage Example:
        ```python
        # Run vsaa.NNEDI3 on leftover combed frames
        vfm(clip, postprocess=NNEDI3(double_rate=False).deinterlace)
        ```

    Args:
        clip: Input clip.
        tff: Sets the field order of the clip. Normally the field order is obtained from the `_FieldBased` frame
            property. This parameter is only used for those frames where the `_FieldBased` property has an invalid
            value or doesn't exist. If the field order is wrong, VFM's output will be visibly wrong in mode 0.
        field: Sets the field to match from. This is the field that VFM will take from the current frame in case of p
            or n matches. It is recommended to make this the same as the field order, unless you experience matching
            failures with that setting. In certain circumstances changing the field that is used to match from can have
            a large impact on matching performance. 0 and 1 will disregard the `_FieldBased` frame property. 2 and 3
            will adapt to the field order obtained from the `_FieldBased` property. Defaults to 2.
        mode: Sets the matching mode or strategy to use. Plain 2-way matching (option 0) is the safest of all the
            options in the sense that it won't risk creating jerkiness due to duplicate frames when possible, but if
            there are bad edits or blended fields it will end up outputting combed frames when a good match might
            actually exist. 3-way matching + trying the 4th/5th matches if all 3 of the original matches are detected as
            combed (option 5) is the most risky in terms of creating jerkiness, but will almost always find a good frame
            if there is one. The other settings (options 1, 2, 3, and 4) are all somewhere in between options 0 and 5 in
            terms of risking jerkiness and creating duplicate frames vs. finding good matches in sections with bad
            edits, orphaned fields, blended fields, etc. Note that the combed condition here is not the same as the
            `_Combed` frame property. Instead it's a combination of relative and absolute threshold comparisons and
            can still lead to the match being changed even when the `_Combed` flag is not set on the original frame.
            Defaults to VFMMode.TWO_WAY_MATCH_THIRD_COMBED.
        mchroma: Sets whether or not chroma is included during the match comparisons. In most cases it is recommended
            to leave this enabled. Only if your clip has bad chroma problems such as heavy rainbowing or other artifacts
            should you set this to false. Setting this to false could also be used to speed things up at the cost of
            some accuracy. Defaults to True.
        cthresh: This is the area combing threshold used for combed frame detection. This essentially controls how
            "strong" or "visible" combing must be to be detected. Larger values mean combing must be more visible and
            smaller values mean combing can be less visible or strong and still be detected. Valid settings are from -1
            (every pixel will be detected as combed) to 255 (no pixel will be detected as combed). This is basically a
            pixel difference value. A good range is between 8 and 12. Defaults to 9.
        mi: The number of combed pixels inside any of the `blockx` by `blocky` size blocks on the frame for the frame
            to be detected as combed. While `cthresh` controls how "visible" the combing must be, this setting controls
            "how much" combing there must be in any localized area (a window defined by the `blockx` and `blocky`
            settings) on the frame. The minimum is 0, the maximum is `blocky` * `blockx` (at which point no frames will
            ever be detected as combed). Defaults to 80.
        chroma: Sets whether or not chroma is considered in the combed frame decision. Only disable this if your source
            has chroma problems (rainbowing, etc) that are causing problems for the combed frame detection with `chroma`
            enabled. In practice, using chroma=false is usually more reliable, except when there is chroma-only combing
            in the source. Defaults to True.
        block: Sets the size of the window used during combed frame detection. This has to do with the size of the area
            in which `mi` number of pixels are required to be detected as combed for a frame to be declared combed. See
            the `mi` parameter description for more info. Possible values are any power of 2 between 4 and 512. Defaults
            to 16.
        y: The rows from `y0` to `y1` will be excluded from the field matching decision. This can be used to ignore
            subtitles, a logo, or other things that may interfere with the matching. Set `y0` equal to `y1` to disable.
            Defaults to (16, 16).
        scthresh: Sets the scenechange threshold as a percentage of maximum change on the luma plane. Good values are
            in the 8 to 14 range. Defaults to 12.
        micmatch: When micmatch is greater than 0, VFM will take into account the mic values of matches when deciding
            what match to use as the final match. Only matches that could be used within the current matching mode are
            considered. micmatch has 3 possible settings:

               - 0: disabled. Modes 1, 2 and 3 effectively become identical to mode 0. Mode 5 becomes identical to mode
                4.
               - 1: micmatching will be used only around scene changes. See the `scthresh` parameter.
               - 2: micmatching will be used everywhere.

            Defaults to 1.
        micout: If true, VFM will calculate the mic values for all possible matches (p/c/n/b/u). Otherwise, only the
            mic values for the matches allowed by `mode` will be calculated. Defaults to False.
        clip2: Clip that VFM will use to create the output frames. If `clip2` is used, VFM will perform all
            calculations based on `clip`, but will copy the chosen fields from `clip2`. This can be used to work around
            VFM's video format limitations. Defaults to None.
        postprocess: Optional function or clip to process combed frames. If a function is passed, it should take a clip
            as input and return a clip as output. If a clip is passed, it will be used as the postprocessed clip. The
            output of the clip or function must have the same framerate as the input clip. Defaults to None.

    Returns:
        Field matched clip with progressive frames.
    """

    tff = FieldBased.from_param_or_video(tff, clip, True, vfm).is_tff

    nblock = normalize_seq(block, 2)

    if clip2 is None and clip.format not in (vs.YUV420P8, vs.YUV422P8, vs.YUV440P8, vs.YUV444P8, vs.GRAY8):
        new_family = vs.GRAY if clip.format.color_family is vs.GRAY else vs.YUV
        new_subsampling_w = min(clip.format.subsampling_w, 2)
        new_subsampling_h = min(clip.format.subsampling_h, 2)

        clip2 = clip
        clip = clip.resize.Bilinear(
            format=clip.format.replace(
                color_family=new_family,
                sample_type=vs.SampleType.INTEGER,
                bits_per_sample=8,
                subsampling_w=new_subsampling_w,
                subsampling_h=new_subsampling_h,
            )
        )

    fieldmatch = core.vivtc.VFM(
        clip,
        tff,
        field,
        mode,
        mchroma,
        cthresh,
        mi,
        chroma,
        nblock[0],
        nblock[1],
        y[0],
        y[1],
        scthresh,
        micmatch,
        micout,
        clip2,
    )

    if postprocess:
        if callable(postprocess):
            postprocess = postprocess(fallback(clip2, clip))

        FramerateMismatchError.check(
            vfm, clip, postprocess, message="The post-processed clip must be the same framerate as the input clip!"
        )

        fieldmatch = core.akarin.Select([fieldmatch, postprocess], fieldmatch, "x._Combed")

    return fieldmatch