nnedi3

This module implements wrappers for Neural Network Edge Directed Interpolation, 3rd generation (NNEDI3).

Classes:

  • NNEDI3

    Base class for NNEDI3 interpolating methods.

  • Nnedi3

    Full implementation of the NNEDI3 anti-aliaser.

  • Nnedi3DR

    Concrete implementation of NNEDI3 used as a double-rater.

  • Nnedi3SR

    Concrete implementation of NNEDI3 used as a single-rater.

  • Nnedi3SS

    Concrete implementation of NNEDI3 used as a supersampler.

NNEDI3 dataclass

NNEDI3(
    nsize: int = 0,
    nns: int = 4,
    qual: int = 2,
    etype: int = 0,
    pscrn: int = 1,
    opencl: bool = False,
    *,
    field: int = 0,
    drop_fields: bool = True,
    transpose_first: bool = False,
    shifter: KernelT = Catrom,
    scaler: ScalerT | None = None
)

Bases: _FullInterpolate, Interpolater

Base class for NNEDI3 interpolating methods.

Methods:

  • copy –

    Returns a new Antialiaser class replacing specified fields with new values.

  • full_interpolate –

    Interpolates the clip in both directions with a single plugin call.

  • get_aa_args –

    Assembles the keyword arguments passed to the NNEDI3 plugin.

  • interpolate –

    Interpolates the clip and applies the post-shifting step.

  • is_full_interpolate_enabled –

    Whether full interpolation can be performed.

  • shift_interpolate –

    Applies a post-shifting interpolation operation to the interpolated clip.

Attributes:

  • drop_fields (bool) –

    Whether to discard the unused field based on the field setting.

  • etype (int) –

    The set of weights used in the predictor neural network.

  • field (int) –

    Controls the mode of operation and which field is kept in the resized image.

  • nns (int) –

    Number of neurons in the predictor neural network.

  • nsize (int) –

    Size of the local neighbourhood around each pixel used by the predictor neural network.

  • opencl (bool) –

    Enables the use of the OpenCL variant.

  • pscrn (int) –

    The prescreener used to decide which pixels should be processed by the predictor neural network.

  • qual (int) –

    The number of different neural network predictions that are blended together to compute the final output value.

  • scaler (ScalerT | None) –

    Scaler used for additional scaling operations. If None, defaults to shifter.

  • shifter (KernelT) –

    Kernel used for shifting operations. Defaults to Catrom.

  • transpose_first (bool) –

    Transpose the clip before any operation.

drop_fields class-attribute instance-attribute

drop_fields: bool = True

Whether to discard the unused field based on the field setting.

etype class-attribute instance-attribute

etype: int = 0

The set of weights used in the predictor neural network. Possible values:

  • 0: Weights trained to minimise absolute error.
  • 1: Weights trained to minimise squared error.

field class-attribute instance-attribute

field: int = 0

Controls the mode of operation and which field is kept in the resized image. Possible values:

  • 0: Same rate, keeps the bottom field.
  • 1: Same rate, keeps the top field.
  • 2: Double rate (alternates each frame), starts with the bottom field.
  • 3: Double rate (alternates each frame), starts with the top field.
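
A quick sketch of how the field modes are selected, assuming Nnedi3 is imported from the vsaa package (the variable names are illustrative):

from vsaa import Nnedi3

# Same rate: only one field is interpolated; field=1 keeps the top field.
aa_same_rate = Nnedi3(field=1)

# Double rate: both fields are interpolated on alternating frames,
# starting with the top field.
aa_double_rate = Nnedi3(field=3)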

nns class-attribute instance-attribute

nns: int = 4

Number of neurons in the predictor neural network. Possible values:

  • 0: 16
  • 1: 32
  • 2: 64
  • 3: 128
  • 4: 256

nsize class-attribute instance-attribute

nsize: int = 0

Size of the local neighbourhood around each pixel used by the predictor neural network. Possible settings:

  • 0: 8x6
  • 1: 16x6
  • 2: 32x6
  • 3: 48x6
  • 4: 8x4
  • 5: 16x4
  • 6: 32x4

opencl class-attribute instance-attribute

opencl: bool = False

Enables the use of the OpenCL variant. Note that this will only work if full interpolation can be performed.

pscrn class-attribute instance-attribute

pscrn: int = 1

The prescreener used to decide which pixels should be processed by the predictor neural network, and which can be handled by simple cubic interpolation. Since most pixels can be handled by cubic interpolation, using the prescreener generally results in much faster processing. Possible values:

  • 0: No prescreening; every pixel is processed by the predictor neural network. This is really slow.
  • 1: Old prescreener.
  • 2: New prescreener level 0.
  • 3: New prescreener level 1.
  • 4: New prescreener level 2.

The new prescreener is not available with float input.

qual class-attribute instance-attribute

qual: int = 2

The number of different neural network predictions that are blended together to compute the final output value. Each neural network was trained on a different set of training data. Blending the results of these different networks improves generalisation to unseen data. Possible values are 1 and 2.

scaler class-attribute instance-attribute

scaler: ScalerT | None = None

Scaler used for additional scaling operations. If None, defaults to shifter.

shifter class-attribute instance-attribute

shifter: KernelT = Catrom

Kernel used for shifting operations. Defaults to Catrom.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

copy

copy(**kwargs: Any) -> Self

Returns a new Antialiaser class replacing specified fields with new values.

Source code
def copy(self, **kwargs: Any) -> Self:
    """Returns a new Antialiaser class replacing specified fields with new values"""
    return replace(self, **kwargs)
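
A minimal sketch of copy, assuming Nnedi3 is imported from the vsaa package:

from vsaa import Nnedi3

aa = Nnedi3()

# Dataclass-style copy: only the named fields are replaced in the
# returned instance; the original is left untouched.
aa_opencl = aa.copy(opencl=True, pscrn=2)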

full_interpolate

full_interpolate(
    clip: VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def full_interpolate(
    self, clip: vs.VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode:
    return clip.sneedif.NNEDI3(
        self.field, double_y, double_x, transpose_first=self.transpose_first, **self.get_aa_args(clip) | kwargs
    )

get_aa_args

get_aa_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]
Source code
def get_aa_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    assert clip.format

    if (pscrn := self.pscrn) > 1 and clip.format.sample_type == vs.FLOAT:
        from warnings import warn
        warn(
            f"{self.__class__.__name__}: The new prescreener {self.pscrn} is not available with float input. "
            "Falling back to old prescreener...",
            Warning
        )
        pscrn = 1

    return dict(nsize=self.nsize, nns=self.nns, qual=self.qual, etype=self.etype, pscrn=pscrn) | kwargs

interpolate

interpolate(
    clip: VideoNode, double_y: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def interpolate(self, clip: vs.VideoNode, double_y: bool, **kwargs: Any) -> ConstantFormatVideoNode:
    interpolated = clip.znedi3.nnedi3(
        self.field, double_y or not self.drop_fields, **self.get_aa_args(clip) | kwargs
    )
    return self.shift_interpolate(clip, interpolated, double_y)

is_full_interpolate_enabled

is_full_interpolate_enabled(x: bool, y: bool) -> bool
Source code
def is_full_interpolate_enabled(self, x: bool, y: bool) -> bool:
    return self.opencl and x and y

shift_interpolate

shift_interpolate(
    clip: VideoNode, inter: VideoNode, double_y: bool
) -> ConstantFormatVideoNode

Applies a post-shifting interpolation operation to the interpolated clip.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • inter

    (VideoNode) –

    Interpolated clip.

  • double_y

    (bool) –

Whether the height has been doubled.

Returns:

  • ConstantFormatVideoNode

    Shifted clip.

Source code
def shift_interpolate(
    self,
    clip: vs.VideoNode,
    inter: vs.VideoNode,
    double_y: bool,
) -> ConstantFormatVideoNode:
    """
    Applies a post-shifting interpolation operation to the interpolated clip.

    :param clip:        Source clip.
    :param inter:       Interpolated clip.
    :param double_y:    Whether the height has been doubled
    :return:            Shifted clip.
    """
    assert check_variable(clip, self.__class__)
    assert check_variable(inter, self.__class__)

    if not double_y and not self.drop_fields:
        shift = (self._shift * int(not self.field), 0)

        inter = self._scaler.scale(inter, clip.width, clip.height, shift)

        return self._post_interpolate(clip, inter, double_y)  # type: ignore[arg-type]

    return inter

Nnedi3 dataclass

Nnedi3(
    nsize: int = 0,
    nns: int = 4,
    qual: int = 2,
    etype: int = 0,
    pscrn: int = 1,
    opencl: bool = False,
    *,
    field: int = 0,
    drop_fields: bool = True,
    transpose_first: bool = False,
    shifter: KernelT = Catrom,
    scaler: ScalerT | None = None
)

Bases: Nnedi3DR, Nnedi3SS, Antialiaser

Full implementation of the NNEDI3 anti-aliaser.
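
A minimal usage sketch, assuming a VapourSynth script in which Nnedi3 is imported from the vsaa package and clip is a progressive clip:

from vsaa import Nnedi3

aa = Nnedi3()

# Single-rate anti-aliasing along both axes.
antialiased = aa.aa(clip)

# Supersampling: double the resolution with NNEDI3, letting the configured
# shifter/scaler take care of the final placement.
doubled = aa.scale(clip, clip.width * 2, clip.height * 2)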

Methods:

  • aa –

    Performs the anti-aliasing operation in the requested directions.

  • copy –

    Returns a new Antialiaser class replacing specified fields with new values.

  • draa –

    Performs the anti-aliasing operation on both fields and merges the results.

  • full_interpolate –

    Interpolates the clip in both directions with a single plugin call.

  • get_aa_args –

    Assembles the keyword arguments passed to the NNEDI3 plugin.

  • get_sr_args –

    Retrieves arguments for single rating processing.

  • get_ss_args –

    Retrieves arguments for super sampling processing.

  • interpolate –

    Interpolates the clip and applies the post-shifting step.

  • is_full_interpolate_enabled –

    Whether full interpolation can be performed.

  • kernel_radius –

    Radius of the local neighbourhood selected by nsize.

  • scale –

    Scale the given clip using super sampling method.

  • shift_interpolate –

    Applies a post-shifting interpolation operation to the interpolated clip.

Attributes:

  • drop_fields (bool) –

    Whether to discard the unused field based on the field setting.

  • etype (int) –

    The set of weights used in the predictor neural network.

  • field (int) –

    Controls the mode of operation and which field is kept in the resized image.

  • merge_func (Callable[[VideoNode, VideoNode], ConstantFormatVideoNode]) –

    Function used to merge the clips after the double-rate operation.

  • nns (int) –

    Number of neurons in the predictor neural network.

  • nsize (int) –

    Size of the local neighbourhood around each pixel used by the predictor neural network.

  • opencl (bool) –

    Enables the use of the OpenCL variant.

  • pscrn (int) –

    The prescreener used to decide which pixels should be processed by the predictor neural network.

  • qual (int) –

    The number of different neural network predictions that are blended together to compute the final output value.

  • scaler (ScalerT | None) –

    Scaler used for additional scaling operations. If None, defaults to shifter.

  • shifter (KernelT) –

    Kernel used for shifting operations. Defaults to Catrom.

  • transpose_first (bool) –

    Transpose the clip before any operation.

drop_fields class-attribute instance-attribute

drop_fields: bool = True

Whether to discard the unused field based on the field setting.

etype class-attribute instance-attribute

etype: int = 0

The set of weights used in the predictor neural network. Possible values:

  • 0: Weights trained to minimise absolute error.
  • 1: Weights trained to minimise squared error.

field class-attribute instance-attribute

field: int = 0

Controls the mode of operation and which field is kept in the resized image. Possible values:

  • 0: Same rate, keeps the bottom field.
  • 1: Same rate, keeps the top field.
  • 2: Double rate (alternates each frame), starts with the bottom field.
  • 3: Double rate (alternates each frame), starts with the top field.

merge_func class-attribute instance-attribute

merge_func: Callable[[VideoNode, VideoNode], ConstantFormatVideoNode] = Merge

Function used to merge the clips after the double-rate operation.

nns class-attribute instance-attribute

nns: int = 4

Number of neurons in the predictor neural network. Possible values:

  • 0: 16
  • 1: 32
  • 2: 64
  • 3: 128
  • 4: 256

nsize class-attribute instance-attribute

nsize: int = 0

Size of the local neighbourhood around each pixel used by the predictor neural network. Possible settings:

  • 0: 8x6
  • 1: 16x6
  • 2: 32x6
  • 3: 48x6
  • 4: 8x4
  • 5: 16x4
  • 6: 32x4

opencl class-attribute instance-attribute

opencl: bool = False

Enables the use of the OpenCL variant. Note that this will only work if full interpolation can be performed.

pscrn class-attribute instance-attribute

pscrn: int = 1

The prescreener used to decide which pixels should be processed by the predictor neural network, and which can be handled by simple cubic interpolation. Since most pixels can be handled by cubic interpolation, using the prescreener generally results in much faster processing. Possible values:

  • 0: No prescreening; every pixel is processed by the predictor neural network. This is really slow.
  • 1: Old prescreener.
  • 2: New prescreener level 0.
  • 3: New prescreener level 1.
  • 4: New prescreener level 2.

The new prescreener is not available with float input.

qual class-attribute instance-attribute

qual: int = 2

The number of different neural network predictions that are blended together to compute the final output value. Each neural network was trained on a different set of training data. Blending the results of these different networks improves generalisation to unseen data. Possible values are 1 and 2.

scaler class-attribute instance-attribute

scaler: ScalerT | None = None

Scaler used for additional scaling operations. If None, defaults to shifter.

shifter class-attribute instance-attribute

shifter: KernelT = Catrom

Kernel used for shifting operations. Defaults to Catrom.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

aa

aa(
    clip: VideoNode, y: bool = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode
aa(
    clip: VideoNode, dir: AADirection = BOTH, /, **kwargs: Any
) -> ConstantFormatVideoNode
aa(
    clip: VideoNode,
    y_or_dir: bool | AADirection = True,
    x: bool = True,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
Source code
def aa(
    self, clip: vs.VideoNode, y_or_dir: bool | AADirection = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode:
    if isinstance(y_or_dir, AADirection):
        y, x = y_or_dir.to_yx()
    else:
        y = y_or_dir

    clip = self._preprocess_clip(clip)

    return self._do_aa(clip, y, x, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new Antialiaser class replacing specified fields with new values.

Source code
def copy(self, **kwargs: Any) -> Self:
    """Returns a new Antialiaser class replacing specified fields with new values"""
    return replace(self, **kwargs)

draa

draa(
    clip: VideoNode, y: bool = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode
draa(
    clip: VideoNode, dir: AADirection = BOTH, /, **kwargs: Any
) -> ConstantFormatVideoNode
draa(
    clip: VideoNode,
    y_or_dir: bool | AADirection = True,
    x: bool = True,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
Source code
def draa(
    self, clip: vs.VideoNode, y_or_dir: bool | AADirection = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode:
    if isinstance(y_or_dir, AADirection):
        y, x = y_or_dir.to_yx()
    else:
        y = y_or_dir

    clip = self._preprocess_clip(clip)

    original_field = int(self.field)

    self.field = 0
    aa0 = super()._do_aa(clip, y, x, **kwargs)

    self.field = 1
    aa1 = super()._do_aa(clip, y, x, **kwargs)

    self.field = original_field

    return self.merge_func(aa0, aa1)

full_interpolate

full_interpolate(
    clip: VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def full_interpolate(
    self, clip: vs.VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode:
    return clip.sneedif.NNEDI3(
        self.field, double_y, double_x, transpose_first=self.transpose_first, **self.get_aa_args(clip) | kwargs
    )

get_aa_args

get_aa_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]
Source code
def get_aa_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    assert clip.format

    if (pscrn := self.pscrn) > 1 and clip.format.sample_type == vs.FLOAT:
        from warnings import warn
        warn(
            f"{self.__class__.__name__}: The new prescreener {self.pscrn} is not available with float input. "
            "Falling back to old prescreener...",
            Warning
        )
        pscrn = 1

    return dict(nsize=self.nsize, nns=self.nns, qual=self.qual, etype=self.etype, pscrn=pscrn) | kwargs

get_sr_args

get_sr_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]

Retrieves arguments for single rating processing.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any] –

    Passed keyword arguments.

Source code
def get_sr_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    """
    Retrieves arguments for single rating processing.

    :param clip:        Source clip.
    :param **kwargs:    Additional arguments.
    :return:            Passed keyword arguments.
    """
    return kwargs

get_ss_args

get_ss_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]

Retrieves arguments for super sampling processing.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any] –

    Passed keyword arguments.

Source code
def get_ss_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    """
    Retrieves arguments for super sampling processing.

    :param clip:        Source clip.
    :param **kwargs:    Additional arguments.
    :return:            Passed keyword arguments.
    """
    return kwargs

interpolate

interpolate(
    clip: VideoNode, double_y: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def interpolate(self, clip: vs.VideoNode, double_y: bool, **kwargs: Any) -> ConstantFormatVideoNode:
    interpolated = clip.znedi3.nnedi3(
        self.field, double_y or not self.drop_fields, **self.get_aa_args(clip) | kwargs
    )
    return self.shift_interpolate(clip, interpolated, double_y)

is_full_interpolate_enabled

is_full_interpolate_enabled(x: bool, y: bool) -> bool
Source code
def is_full_interpolate_enabled(self, x: bool, y: bool) -> bool:
    return self.opencl and x and y

kernel_radius

kernel_radius() -> int
Source code
@inject_self.cached.property
def kernel_radius(self) -> int:
    match self.nsize:
        case 1 | 5:
            return 16
        case 2 | 6:
            return 32
        case 3:
            return 48
        case _:
            return 8

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using super sampling method.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    A tuple representing the shift values for the y and x axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the interpolate or full_interpolate methods.

Returns:

  • VideoNode

    The scaled clip.

Source code
@inject_self.cached
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> vs.VideoNode:
    """
    Scale the given clip using super sampling method.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the y and x axes.
    :param **kwargs:    Additional arguments to be passed to the `interpolate` or `full_interpolate` methods.

    :return:            The scaled clip.
    """
    assert check_progressive(clip, self.scale)

    clip = self._preprocess_clip(clip)
    width, height = self._wh_norm(clip, width, height)

    if (clip.width, clip.height) == (width, height):
        return clip

    kwargs = self.get_aa_args(clip, **kwargs) | self.get_ss_args(clip, **kwargs) | kwargs

    divw, divh = (ceil(size) for size in (width / clip.width, height / clip.height))

    mult_x, mult_y = (int(log2(divs)) for divs in (divw, divh))

    cdivw, cdivh = 1 << clip.format.subsampling_w, 1 << clip.format.subsampling_h

    upscaled = clip

    def _transpose(before: bool, is_width: int, y: int, x: int) -> None:
        nonlocal upscaled

        before = self.transpose_first if before else not self.transpose_first

        if ((before or not y) if is_width else (before and x)):
            upscaled = upscaled.std.Transpose()

    for (y, x) in zip_longest([True] * mult_y, [True] * mult_x, fillvalue=False):
        if isinstance(self, _FullInterpolate) and self.is_full_interpolate_enabled(x, y):
            upscaled = self.full_interpolate(upscaled, y, x, **kwargs)
        else:
            for isx, val in enumerate([y, x]):
                if val:
                    _transpose(True, isx, y, x)

                    upscaled = self.interpolate(upscaled, True, **kwargs)

                    _transpose(False, isx, y, x)

        topshift = leftshift = cleftshift = ctopshift = 0.0

        if y and self._shift:
            topshift = ctopshift = self._shift

            if cdivw == 2 and cdivh == 2:
                ctopshift -= 0.125
            elif cdivw == 1 and cdivh == 2:
                ctopshift += 0.125

        cresshift = 0.0

        if x and self._shift:
            leftshift = cleftshift = self._shift

            if cdivw in {4, 2} and cdivh in {4, 2, 1}:
                cleftshift = self._shift + 0.5

                if cdivw == 4 and cdivh == 1:
                    cresshift = 0.125 * 1
                elif cdivw == 2 and cdivh == 2:
                    cresshift = 0.125 * 2
                elif cdivw == 2 and cdivh == 1:
                    cresshift = 0.125 * 3

                cleftshift -= cresshift

        if isinstance(self._shifter, NoShift):
            if upscaled.format.subsampling_h or upscaled.format.subsampling_w:
                upscaled = Catrom.shift(upscaled, 0, [0, cleftshift + cresshift])
        else:
            upscaled = self._shifter.shift(
                upscaled, [topshift, ctopshift], [leftshift, cleftshift]
            )

    return self._scaler.scale(upscaled, width, height, shift)

shift_interpolate

shift_interpolate(
    clip: VideoNode, inter: VideoNode, double_y: bool
) -> ConstantFormatVideoNode

Applies a post-shifting interpolation operation to the interpolated clip.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • inter

    (VideoNode) –

    Interpolated clip.

  • double_y

    (bool) –

Whether the height has been doubled.

Returns:

  • ConstantFormatVideoNode

    Shifted clip.

Source code
def shift_interpolate(
    self,
    clip: vs.VideoNode,
    inter: vs.VideoNode,
    double_y: bool,
) -> ConstantFormatVideoNode:
    """
    Applies a post-shifting interpolation operation to the interpolated clip.

    :param clip:        Source clip.
    :param inter:       Interpolated clip.
    :param double_y:    Whether the height has been doubled
    :return:            Shifted clip.
    """
    assert check_variable(clip, self.__class__)
    assert check_variable(inter, self.__class__)

    if not double_y and not self.drop_fields:
        shift = (self._shift * int(not self.field), 0)

        inter = self._scaler.scale(inter, clip.width, clip.height, shift)

        return self._post_interpolate(clip, inter, double_y)  # type: ignore[arg-type]

    return inter

Nnedi3DR dataclass

Nnedi3DR(
    nsize: int = 0,
    nns: int = 4,
    qual: int = 2,
    etype: int = 0,
    pscrn: int = 1,
    opencl: bool = False,
    *,
    field: int = 0,
    drop_fields: bool = True,
    transpose_first: bool = False,
    shifter: KernelT = Catrom,
    scaler: ScalerT | None = None
)

Bases: Nnedi3SR, DoubleRater

Concrete implementation of NNEDI3 used as a double-rater.
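
A short double-rate sketch, assuming Nnedi3DR is imported from the vsaa package and clip is defined:

from vsaa import Nnedi3DR

aa = Nnedi3DR()

# draa runs the anti-aliaser once per field (field=0, then field=1)
# and merges the two results with merge_func (Merge by default).
smoothed = aa.draa(clip)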

Methods:

  • aa –

    Performs the anti-aliasing operation in the requested directions.

  • copy –

    Returns a new Antialiaser class replacing specified fields with new values.

  • draa –

    Performs the anti-aliasing operation on both fields and merges the results.

  • full_interpolate –

    Interpolates the clip in both directions with a single plugin call.

  • get_aa_args –

    Assembles the keyword arguments passed to the NNEDI3 plugin.

  • get_sr_args –

    Retrieves arguments for single rating processing.

  • interpolate –

    Interpolates the clip and applies the post-shifting step.

  • is_full_interpolate_enabled –

    Whether full interpolation can be performed.

  • shift_interpolate –

    Applies a post-shifting interpolation operation to the interpolated clip.

Attributes:

  • drop_fields (bool) –

    Whether to discard the unused field based on the field setting.

  • etype (int) –

    The set of weights used in the predictor neural network.

  • field (int) –

    Controls the mode of operation and which field is kept in the resized image.

  • merge_func (Callable[[VideoNode, VideoNode], ConstantFormatVideoNode]) –

    Function used to merge the clips after the double-rate operation.

  • nns (int) –

    Number of neurons in the predictor neural network.

  • nsize (int) –

    Size of the local neighbourhood around each pixel used by the predictor neural network.

  • opencl (bool) –

    Enables the use of the OpenCL variant.

  • pscrn (int) –

    The prescreener used to decide which pixels should be processed by the predictor neural network.

  • qual (int) –

    The number of different neural network predictions that are blended together to compute the final output value.

  • scaler (ScalerT | None) –

    Scaler used for additional scaling operations. If None, defaults to shifter.

  • shifter (KernelT) –

    Kernel used for shifting operations. Defaults to Catrom.

  • transpose_first (bool) –

    Transpose the clip before any operation.

drop_fields class-attribute instance-attribute

drop_fields: bool = True

Whether to discard the unused field based on the field setting.

etype class-attribute instance-attribute

etype: int = 0

The set of weights used in the predictor neural network. Possible values:

  • 0: Weights trained to minimise absolute error.
  • 1: Weights trained to minimise squared error.

field class-attribute instance-attribute

field: int = 0

Controls the mode of operation and which field is kept in the resized image. Possible values:

  • 0: Same rate, keeps the bottom field.
  • 1: Same rate, keeps the top field.
  • 2: Double rate (alternates each frame), starts with the bottom field.
  • 3: Double rate (alternates each frame), starts with the top field.

merge_func class-attribute instance-attribute

merge_func: Callable[[VideoNode, VideoNode], ConstantFormatVideoNode] = Merge

Function used to merge the clips after the double-rate operation.

nns class-attribute instance-attribute

nns: int = 4

Number of neurons in the predictor neural network. Possible values:

  • 0: 16
  • 1: 32
  • 2: 64
  • 3: 128
  • 4: 256

nsize class-attribute instance-attribute

nsize: int = 0

Size of the local neighbourhood around each pixel used by the predictor neural network. Possible settings:

  • 0: 8x6
  • 1: 16x6
  • 2: 32x6
  • 3: 48x6
  • 4: 8x4
  • 5: 16x4
  • 6: 32x4

opencl class-attribute instance-attribute

opencl: bool = False

Enables the use of the OpenCL variant. Note that this will only work if full interpolation can be performed.

pscrn class-attribute instance-attribute

pscrn: int = 1

The prescreener used to decide which pixels should be processed by the predictor neural network, and which can be handled by simple cubic interpolation. Since most pixels can be handled by cubic interpolation, using the prescreener generally results in much faster processing. Possible values:

  • 0: No prescreening; every pixel is processed by the predictor neural network. This is really slow.
  • 1: Old prescreener.
  • 2: New prescreener level 0.
  • 3: New prescreener level 1.
  • 4: New prescreener level 2.

The new prescreener is not available with float input.

qual class-attribute instance-attribute

qual: int = 2

The number of different neural network predictions that are blended together to compute the final output value. Each neural network was trained on a different set of training data. Blending the results of these different networks improves generalisation to unseen data. Possible values are 1 and 2.

scaler class-attribute instance-attribute

scaler: ScalerT | None = None

Scaler used for additional scaling operations. If None, defaults to shifter.

shifter class-attribute instance-attribute

shifter: KernelT = Catrom

Kernel used for shifting operations. Defaults to Catrom.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

aa

aa(
    clip: VideoNode, y: bool = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode
aa(
    clip: VideoNode, dir: AADirection = BOTH, /, **kwargs: Any
) -> ConstantFormatVideoNode
aa(
    clip: VideoNode,
    y_or_dir: bool | AADirection = True,
    x: bool = True,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
Source code
def aa(
    self, clip: vs.VideoNode, y_or_dir: bool | AADirection = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode:
    if isinstance(y_or_dir, AADirection):
        y, x = y_or_dir.to_yx()
    else:
        y = y_or_dir

    clip = self._preprocess_clip(clip)

    return self._do_aa(clip, y, x, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new Antialiaser class replacing specified fields with new values.

Source code
def copy(self, **kwargs: Any) -> Self:
    """Returns a new Antialiaser class replacing specified fields with new values"""
    return replace(self, **kwargs)

draa

draa(
    clip: VideoNode, y: bool = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode
draa(
    clip: VideoNode, dir: AADirection = BOTH, /, **kwargs: Any
) -> ConstantFormatVideoNode
draa(
    clip: VideoNode,
    y_or_dir: bool | AADirection = True,
    x: bool = True,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
Source code
def draa(
    self, clip: vs.VideoNode, y_or_dir: bool | AADirection = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode:
    if isinstance(y_or_dir, AADirection):
        y, x = y_or_dir.to_yx()
    else:
        y = y_or_dir

    clip = self._preprocess_clip(clip)

    original_field = int(self.field)

    self.field = 0
    aa0 = super()._do_aa(clip, y, x, **kwargs)

    self.field = 1
    aa1 = super()._do_aa(clip, y, x, **kwargs)

    self.field = original_field

    return self.merge_func(aa0, aa1)

full_interpolate

full_interpolate(
    clip: VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def full_interpolate(
    self, clip: vs.VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode:
    return clip.sneedif.NNEDI3(
        self.field, double_y, double_x, transpose_first=self.transpose_first, **self.get_aa_args(clip) | kwargs
    )

get_aa_args

get_aa_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]
Source code
def get_aa_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    assert clip.format

    if (pscrn := self.pscrn) > 1 and clip.format.sample_type == vs.FLOAT:
        from warnings import warn
        warn(
            f"{self.__class__.__name__}: The new prescreener {self.pscrn} is not available with float input. "
            "Falling back to old prescreener...",
            Warning
        )
        pscrn = 1

    return dict(nsize=self.nsize, nns=self.nns, qual=self.qual, etype=self.etype, pscrn=pscrn) | kwargs

get_sr_args

get_sr_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]

Retrieves arguments for single rating processing.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any] –

    Passed keyword arguments.

Source code
def get_sr_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    """
    Retrieves arguments for single rating processing.

    :param clip:        Source clip.
    :param **kwargs:    Additional arguments.
    :return:            Passed keyword arguments.
    """
    return kwargs

interpolate

interpolate(
    clip: VideoNode, double_y: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def interpolate(self, clip: vs.VideoNode, double_y: bool, **kwargs: Any) -> ConstantFormatVideoNode:
    interpolated = clip.znedi3.nnedi3(
        self.field, double_y or not self.drop_fields, **self.get_aa_args(clip) | kwargs
    )
    return self.shift_interpolate(clip, interpolated, double_y)

is_full_interpolate_enabled

is_full_interpolate_enabled(x: bool, y: bool) -> bool
Source code
def is_full_interpolate_enabled(self, x: bool, y: bool) -> bool:
    return self.opencl and x and y

shift_interpolate

shift_interpolate(
    clip: VideoNode, inter: VideoNode, double_y: bool
) -> ConstantFormatVideoNode

Applies a post-shifting interpolation operation to the interpolated clip.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • inter

    (VideoNode) –

    Interpolated clip.

  • double_y

    (bool) –

Whether the height has been doubled.

Returns:

  • ConstantFormatVideoNode

    Shifted clip.

Source code
def shift_interpolate(
    self,
    clip: vs.VideoNode,
    inter: vs.VideoNode,
    double_y: bool,
) -> ConstantFormatVideoNode:
    """
    Applies a post-shifting interpolation operation to the interpolated clip.

    :param clip:        Source clip.
    :param inter:       Interpolated clip.
    :param double_y:    Whether the height has been doubled
    :return:            Shifted clip.
    """
    assert check_variable(clip, self.__class__)
    assert check_variable(inter, self.__class__)

    if not double_y and not self.drop_fields:
        shift = (self._shift * int(not self.field), 0)

        inter = self._scaler.scale(inter, clip.width, clip.height, shift)

        return self._post_interpolate(clip, inter, double_y)  # type: ignore[arg-type]

    return inter

Nnedi3SR dataclass

Nnedi3SR(
    nsize: int = 0,
    nns: int = 4,
    qual: int = 2,
    etype: int = 0,
    pscrn: int = 1,
    opencl: bool = False,
    *,
    field: int = 0,
    drop_fields: bool = True,
    transpose_first: bool = False,
    shifter: KernelT = Catrom,
    scaler: ScalerT | None = None
)

Bases: NNEDI3, SingleRater

Concrete implementation of NNEDI3 used as a single-rater.
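
A short single-rate sketch, assuming Nnedi3SR is imported from the vsaa package and clip is defined:

from vsaa import Nnedi3SR

aa = Nnedi3SR()

# Anti-alias both axes (the default), or restrict to a single one;
# y and x are positional-only in aa's signature.
both_axes = aa.aa(clip)
vertical_only = aa.aa(clip, True, False)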

Methods:

  • aa –

    Performs the anti-aliasing operation in the requested directions.

  • copy –

    Returns a new Antialiaser class replacing specified fields with new values.

  • full_interpolate –

    Interpolates the clip in both directions with a single plugin call.

  • get_aa_args –

    Assembles the keyword arguments passed to the NNEDI3 plugin.

  • get_sr_args –

    Retrieves arguments for single rating processing.

  • interpolate –

    Interpolates the clip and applies the post-shifting step.

  • is_full_interpolate_enabled –

    Whether full interpolation can be performed.

  • shift_interpolate –

    Applies a post-shifting interpolation operation to the interpolated clip.

Attributes:

  • drop_fields (bool) –

    Whether to discard the unused field based on the field setting.

  • etype (int) –

    The set of weights used in the predictor neural network.

  • field (int) –

    Controls the mode of operation and which field is kept in the resized image.

  • nns (int) –

    Number of neurons in the predictor neural network.

  • nsize (int) –

    Size of the local neighbourhood around each pixel used by the predictor neural network.

  • opencl (bool) –

    Enables the use of the OpenCL variant.

  • pscrn (int) –

    The prescreener used to decide which pixels should be processed by the predictor neural network.

  • qual (int) –

    The number of different neural network predictions that are blended together to compute the final output value.

  • scaler (ScalerT | None) –

    Scaler used for additional scaling operations. If None, defaults to shifter.

  • shifter (KernelT) –

    Kernel used for shifting operations. Defaults to Catrom.

  • transpose_first (bool) –

    Transpose the clip before any operation.

drop_fields class-attribute instance-attribute

drop_fields: bool = True

Whether to discard the unused field based on the field setting.

etype class-attribute instance-attribute

etype: int = 0

The set of weights used in the predictor neural network. Possible values:

  • 0: Weights trained to minimise absolute error.
  • 1: Weights trained to minimise squared error.

field class-attribute instance-attribute

field: int = 0

Controls the mode of operation and which field is kept in the resized image. Possible values:

  • 0: Same rate, keeps the bottom field.
  • 1: Same rate, keeps the top field.
  • 2: Double rate (alternates each frame), starts with the bottom field.
  • 3: Double rate (alternates each frame), starts with the top field.

nns class-attribute instance-attribute

nns: int = 4

Number of neurons in the predictor neural network. Possible values:

  • 0: 16
  • 1: 32
  • 2: 64
  • 3: 128
  • 4: 256

nsize class-attribute instance-attribute

nsize: int = 0

Size of the local neighbourhood around each pixel used by the predictor neural network. Possible settings:

  • 0: 8x6
  • 1: 16x6
  • 2: 32x6
  • 3: 48x6
  • 4: 8x4
  • 5: 16x4
  • 6: 32x4

opencl class-attribute instance-attribute

opencl: bool = False

Enables the use of the OpenCL variant. Note that this will only work if full interpolation can be performed.

pscrn class-attribute instance-attribute

pscrn: int = 1

The prescreener used to decide which pixels should be processed by the predictor neural network, and which can be handled by simple cubic interpolation. Since most pixels can be handled by cubic interpolation, using the prescreener generally results in much faster processing. Possible values:

  • 0: No prescreening; every pixel is processed by the predictor neural network. This is really slow.
  • 1: Old prescreener.
  • 2: New prescreener level 0.
  • 3: New prescreener level 1.
  • 4: New prescreener level 2.

The new prescreener is not available with float input.

qual class-attribute instance-attribute

qual: int = 2

The number of different neural network predictions that are blended together to compute the final output value. Each neural network was trained on a different set of training data. Blending the results of these different networks improves generalisation to unseen data. Possible values are 1 and 2.

scaler class-attribute instance-attribute

scaler: ScalerT | None = None

Scaler used for additional scaling operations. If None, defaults to shifter.

shifter class-attribute instance-attribute

shifter: KernelT = Catrom

Kernel used for shifting operations. Defaults to Catrom.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

aa

aa(
    clip: VideoNode, y: bool = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode
aa(
    clip: VideoNode, dir: AADirection = BOTH, /, **kwargs: Any
) -> ConstantFormatVideoNode
aa(
    clip: VideoNode,
    y_or_dir: bool | AADirection = True,
    x: bool = True,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
Source code
def aa(
    self, clip: vs.VideoNode, y_or_dir: bool | AADirection = True, x: bool = True, /, **kwargs: Any
) -> ConstantFormatVideoNode:
    if isinstance(y_or_dir, AADirection):
        y, x = y_or_dir.to_yx()
    else:
        y = y_or_dir

    clip = self._preprocess_clip(clip)

    return self._do_aa(clip, y, x, **kwargs)

copy

copy(**kwargs: Any) -> Self

Returns a new Antialiaser class replacing specified fields with new values.

Source code
def copy(self, **kwargs: Any) -> Self:
    """Returns a new Antialiaser class replacing specified fields with new values"""
    return replace(self, **kwargs)

full_interpolate

full_interpolate(
    clip: VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def full_interpolate(
    self, clip: vs.VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode:
    return clip.sneedif.NNEDI3(
        self.field, double_y, double_x, transpose_first=self.transpose_first, **self.get_aa_args(clip) | kwargs
    )

get_aa_args

get_aa_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]
Source code
def get_aa_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    assert clip.format

    if (pscrn := self.pscrn) > 1 and clip.format.sample_type == vs.FLOAT:
        from warnings import warn
        warn(
            f"{self.__class__.__name__}: The new prescreener {self.pscrn} is not available with float input. "
            "Falling back to old prescreener...",
            Warning
        )
        pscrn = 1

    return dict(nsize=self.nsize, nns=self.nns, qual=self.qual, etype=self.etype, pscrn=pscrn) | kwargs

get_sr_args

get_sr_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]

Retrieves arguments for single rating processing.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any] –

    Passed keyword arguments.

Source code
def get_sr_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    """
    Retrieves arguments for single rating processing.

    :param clip:        Source clip.
    :param **kwargs:    Additional arguments.
    :return:            Passed keyword arguments.
    """
    return kwargs

interpolate

interpolate(
    clip: VideoNode, double_y: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def interpolate(self, clip: vs.VideoNode, double_y: bool, **kwargs: Any) -> ConstantFormatVideoNode:
    interpolated = clip.znedi3.nnedi3(
        self.field, double_y or not self.drop_fields, **self.get_aa_args(clip) | kwargs
    )
    return self.shift_interpolate(clip, interpolated, double_y)

is_full_interpolate_enabled

is_full_interpolate_enabled(x: bool, y: bool) -> bool
Source code
def is_full_interpolate_enabled(self, x: bool, y: bool) -> bool:
    return self.opencl and x and y

shift_interpolate

shift_interpolate(
    clip: VideoNode, inter: VideoNode, double_y: bool
) -> ConstantFormatVideoNode

Applies a post-shifting interpolation operation to the interpolated clip.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • inter

    (VideoNode) –

    Interpolated clip.

  • double_y

    (bool) –

Whether the height has been doubled.

Returns:

  • ConstantFormatVideoNode

    Shifted clip.

Source code
def shift_interpolate(
    self,
    clip: vs.VideoNode,
    inter: vs.VideoNode,
    double_y: bool,
) -> ConstantFormatVideoNode:
    """
    Applies a post-shifting interpolation operation to the interpolated clip.

    :param clip:        Source clip.
    :param inter:       Interpolated clip.
    :param double_y:    Whether the height has been doubled
    :return:            Shifted clip.
    """
    assert check_variable(clip, self.__class__)
    assert check_variable(inter, self.__class__)

    if not double_y and not self.drop_fields:
        shift = (self._shift * int(not self.field), 0)

        inter = self._scaler.scale(inter, clip.width, clip.height, shift)

        return self._post_interpolate(clip, inter, double_y)  # type: ignore[arg-type]

    return inter

Nnedi3SS dataclass

Nnedi3SS(
    nsize: int = 0,
    nns: int = 4,
    qual: int = 2,
    etype: int = 0,
    pscrn: int = 1,
    opencl: bool = False,
    *,
    field: int = 0,
    drop_fields: bool = True,
    transpose_first: bool = False,
    shifter: KernelT = Catrom,
    scaler: ScalerT | None = None
)

Bases: NNEDI3, SuperSampler

Concrete implementation of NNEDI3 used as a supersampler.
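
A short supersampling sketch, assuming Nnedi3SS is imported from the vsaa package and clip is defined:

from vsaa import Nnedi3SS

ss = Nnedi3SS()

# scale doubles the clip with NNEDI3 as many times as needed (powers of two),
# then resamples to the exact target size with the configured scaler.
upscaled = ss.scale(clip, int(clip.width * 1.5), int(clip.height * 1.5))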

Methods:

  • copy –

    Returns a new Antialiaser class replacing specified fields with new values.

  • full_interpolate –

    Interpolates the clip in both directions with a single plugin call.

  • get_aa_args –

    Assembles the keyword arguments passed to the NNEDI3 plugin.

  • get_ss_args –

    Retrieves arguments for super sampling processing.

  • interpolate –

    Interpolates the clip and applies the post-shifting step.

  • is_full_interpolate_enabled –

    Whether full interpolation can be performed.

  • kernel_radius –

    Radius of the local neighbourhood selected by nsize.

  • scale –

    Scale the given clip using super sampling method.

  • shift_interpolate –

    Applies a post-shifting interpolation operation to the interpolated clip.

Attributes:

  • drop_fields (bool) –

    Whether to discard the unused field based on the field setting.

  • etype (int) –

    The set of weights used in the predictor neural network.

  • field (int) –

    Controls the mode of operation and which field is kept in the resized image.

  • nns (int) –

    Number of neurons in the predictor neural network.

  • nsize (int) –

    Size of the local neighbourhood around each pixel used by the predictor neural network.

  • opencl (bool) –

    Enables the use of the OpenCL variant.

  • pscrn (int) –

    The prescreener used to decide which pixels should be processed by the predictor neural network.

  • qual (int) –

    The number of different neural network predictions that are blended together to compute the final output value.

  • scaler (ScalerT | None) –

    Scaler used for additional scaling operations. If None, defaults to shifter.

  • shifter (KernelT) –

    Kernel used for shifting operations. Defaults to Catrom.

  • transpose_first (bool) –

    Transpose the clip before any operation.

drop_fields class-attribute instance-attribute

drop_fields: bool = True

Whether to discard the unused field based on the field setting.

etype class-attribute instance-attribute

etype: int = 0

The set of weights used in the predictor neural network. Possible values:

  • 0: Weights trained to minimise absolute error.
  • 1: Weights trained to minimise squared error.

field class-attribute instance-attribute

field: int = 0

Controls the mode of operation and which field is kept in the resized image. Possible values:

  • 0: Same rate, keeps the bottom field.
  • 1: Same rate, keeps the top field.
  • 2: Double rate (alternates each frame), starts with the bottom field.
  • 3: Double rate (alternates each frame), starts with the top field.

nns class-attribute instance-attribute

nns: int = 4

Number of neurons in the predictor neural network. Possible values:

  • 0: 16
  • 1: 32
  • 2: 64
  • 3: 128
  • 4: 256

nsize class-attribute instance-attribute

nsize: int = 0

Size of the local neighbourhood around each pixel used by the predictor neural network. Possible settings:

  • 0: 8x6
  • 1: 16x6
  • 2: 32x6
  • 3: 48x6
  • 4: 8x4
  • 5: 16x4
  • 6: 32x4

opencl class-attribute instance-attribute

opencl: bool = False

Enables the use of the OpenCL variant. Note that this will only work if full interpolation can be performed.

pscrn class-attribute instance-attribute

pscrn: int = 1

The prescreener used to decide which pixels should be processed by the predictor neural network, and which can be handled by simple cubic interpolation. Since most pixels can be handled by cubic interpolation, using the prescreener generally results in much faster processing. Possible values:

  • 0: No prescreening; every pixel is processed by the predictor neural network. This is really slow.
  • 1: Old prescreener.
  • 2: New prescreener level 0.
  • 3: New prescreener level 1.
  • 4: New prescreener level 2.

The new prescreener is not available with float input.

qual class-attribute instance-attribute

qual: int = 2

The number of different neural network predictions that are blended together to compute the final output value. Each neural network was trained on a different set of training data. Blending the results of these different networks improves generalisation to unseen data. Possible values are 1 and 2.

scaler class-attribute instance-attribute

scaler: ScalerT | None = None

Scaler used for additional scaling operations. If None, defaults to shifter.

shifter class-attribute instance-attribute

shifter: KernelT = Catrom

Kernel used for shifting operations. Defaults to Catrom.

transpose_first class-attribute instance-attribute

transpose_first: bool = False

Transpose the clip before any operation.

copy

copy(**kwargs: Any) -> Self

Returns a new Antialiaser class replacing specified fields with new values.

Source code
def copy(self, **kwargs: Any) -> Self:
    """Returns a new Antialiaser class replacing specified fields with new values"""
    return replace(self, **kwargs)

full_interpolate

full_interpolate(
    clip: VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def full_interpolate(
    self, clip: vs.VideoNode, double_y: bool, double_x: bool, **kwargs: Any
) -> ConstantFormatVideoNode:
    return clip.sneedif.NNEDI3(
        self.field, double_y, double_x, transpose_first=self.transpose_first, **self.get_aa_args(clip) | kwargs
    )

get_aa_args

get_aa_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]
Source code
def get_aa_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    assert clip.format

    if (pscrn := self.pscrn) > 1 and clip.format.sample_type == vs.FLOAT:
        from warnings import warn
        warn(
            f"{self.__class__.__name__}: The new prescreener {self.pscrn} is not available with float input. "
            "Falling back to old prescreener...",
            Warning
        )
        pscrn = 1

    return dict(nsize=self.nsize, nns=self.nns, qual=self.qual, etype=self.etype, pscrn=pscrn) | kwargs

get_ss_args

get_ss_args(clip: VideoNode, **kwargs: Any) -> dict[str, Any]

Retrieves arguments for super sampling processing.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments.

Returns:

  • dict[str, Any] –

    Passed keyword arguments.

Source code
def get_ss_args(self, clip: vs.VideoNode, **kwargs: Any) -> dict[str, Any]:
    """
    Retrieves arguments for super sampling processing.

    :param clip:        Source clip.
    :param **kwargs:    Additional arguments.
    :return:            Passed keyword arguments.
    """
    return kwargs

interpolate

interpolate(
    clip: VideoNode, double_y: bool, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def interpolate(self, clip: vs.VideoNode, double_y: bool, **kwargs: Any) -> ConstantFormatVideoNode:
    interpolated = clip.znedi3.nnedi3(
        self.field, double_y or not self.drop_fields, **self.get_aa_args(clip) | kwargs
    )
    return self.shift_interpolate(clip, interpolated, double_y)

is_full_interpolate_enabled

is_full_interpolate_enabled(x: bool, y: bool) -> bool
Source code
def is_full_interpolate_enabled(self, x: bool, y: bool) -> bool:
    return self.opencl and x and y

kernel_radius

kernel_radius() -> int
Source code
@inject_self.cached.property
def kernel_radius(self) -> int:
    match self.nsize:
        case 1 | 5:
            return 16
        case 2 | 6:
            return 32
        case 3:
            return 48
        case _:
            return 8

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNode

Scale the given clip using super sampling method.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    A tuple representing the shift values for the y and x axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the interpolate or full_interpolate methods.

Returns:

  • VideoNode

    The scaled clip.

Source code
@inject_self.cached
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> vs.VideoNode:
    """
    Scale the given clip using super sampling method.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the y and x axes.
    :param **kwargs:    Additional arguments to be passed to the `interpolate` or `full_interpolate` methods.

    :return:            The scaled clip.
    """
    assert check_progressive(clip, self.scale)

    clip = self._preprocess_clip(clip)
    width, height = self._wh_norm(clip, width, height)

    if (clip.width, clip.height) == (width, height):
        return clip

    kwargs = self.get_aa_args(clip, **kwargs) | self.get_ss_args(clip, **kwargs) | kwargs

    divw, divh = (ceil(size) for size in (width / clip.width, height / clip.height))

    mult_x, mult_y = (int(log2(divs)) for divs in (divw, divh))

    cdivw, cdivh = 1 << clip.format.subsampling_w, 1 << clip.format.subsampling_h

    upscaled = clip

    def _transpose(before: bool, is_width: int, y: int, x: int) -> None:
        nonlocal upscaled

        before = self.transpose_first if before else not self.transpose_first

        if ((before or not y) if is_width else (before and x)):
            upscaled = upscaled.std.Transpose()

    for (y, x) in zip_longest([True] * mult_y, [True] * mult_x, fillvalue=False):
        if isinstance(self, _FullInterpolate) and self.is_full_interpolate_enabled(x, y):
            upscaled = self.full_interpolate(upscaled, y, x, **kwargs)
        else:
            for isx, val in enumerate([y, x]):
                if val:
                    _transpose(True, isx, y, x)

                    upscaled = self.interpolate(upscaled, True, **kwargs)

                    _transpose(False, isx, y, x)

        topshift = leftshift = cleftshift = ctopshift = 0.0

        if y and self._shift:
            topshift = ctopshift = self._shift

            if cdivw == 2 and cdivh == 2:
                ctopshift -= 0.125
            elif cdivw == 1 and cdivh == 2:
                ctopshift += 0.125

        cresshift = 0.0

        if x and self._shift:
            leftshift = cleftshift = self._shift

            if cdivw in {4, 2} and cdivh in {4, 2, 1}:
                cleftshift = self._shift + 0.5

                if cdivw == 4 and cdivh == 1:
                    cresshift = 0.125 * 1
                elif cdivw == 2 and cdivh == 2:
                    cresshift = 0.125 * 2
                elif cdivw == 2 and cdivh == 1:
                    cresshift = 0.125 * 3

                cleftshift -= cresshift

        if isinstance(self._shifter, NoShift):
            if upscaled.format.subsampling_h or upscaled.format.subsampling_w:
                upscaled = Catrom.shift(upscaled, 0, [0, cleftshift + cresshift])
        else:
            upscaled = self._shifter.shift(
                upscaled, [topshift, ctopshift], [leftshift, cleftshift]
            )

    return self._scaler.scale(upscaled, width, height, shift)

shift_interpolate

shift_interpolate(
    clip: VideoNode, inter: VideoNode, double_y: bool
) -> ConstantFormatVideoNode

Applies a post-shifting interpolation operation to the interpolated clip.

Parameters:

  • clip

    (VideoNode) –

    Source clip.

  • inter

    (VideoNode) –

    Interpolated clip.

  • double_y

    (bool) –

Whether the height has been doubled.

Returns:

  • ConstantFormatVideoNode

    Shifted clip.

Source code
def shift_interpolate(
    self,
    clip: vs.VideoNode,
    inter: vs.VideoNode,
    double_y: bool,
) -> ConstantFormatVideoNode:
    """
    Applies a post-shifting interpolation operation to the interpolated clip.

    :param clip:        Source clip.
    :param inter:       Interpolated clip.
    :param double_y:    Whether the height has been doubled
    :return:            Shifted clip.
    """
    assert check_variable(clip, self.__class__)
    assert check_variable(inter, self.__class__)

    if not double_y and not self.drop_fields:
        shift = (self._shift * int(not self.field), 0)

        inter = self._scaler.scale(inter, clip.width, clip.height, shift)

        return self._post_interpolate(clip, inter, double_y)  # type: ignore[arg-type]

    return inter