bicubic

Classes:

  • AdobeBicubic

    Adobe's "Bicubic" interpolation preset resizer (b=0, c=0.75).

  • AdobeBicubicSharper

    Adobe's "Bicubic Sharper" interpolation preset resizer (b=0, c=1, blur=1.05).

  • AdobeBicubicSmoother

    Adobe's "Bicubic Smoother" interpolation preset resizer (b=0, c=0.625, blur=1.15).

  • BSpline

    BSpline resizer (b=1, c=0).

  • Bicubic

    Bicubic resizer.

  • BicubicAuto

    Bicubic resizer that follows the rule of b + 2c = ...

  • BicubicSharp

    BicubicSharp resizer (b=0, c=1).

  • Catrom

    Catrom resizer (b=0, c=0.5).

  • FFmpegBicubic

    FFmpeg's swscale default resizer (b=0, c=0.6).

  • Hermite

    Hermite resizer (b=0, c=0).

  • Mitchell

    Mitchell resizer (b=1/3, c=1/3).

  • Robidoux

    Robidoux resizer (b=0.37822, c=0.31089).

  • RobidouxSharp

    RobidouxSharp resizer (b=0.26201, c=0.36899).

  • RobidouxSoft

    RobidouxSoft resizer (b=0.67962, c=0.16019).

AdobeBicubic

AdobeBicubic(**kwargs: Any)

Bases: Bicubic

Adobe's "Bicubic" interpolation preset resizer (b=0, c=0.75).

Initialize the scaler with optional arguments.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image borders handling and sampling grid alignment, optionally using linear light processing.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip, with image borders handling and sampling grid alignment, optionally using linear light processing.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the scaler with optional arguments.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(b=0, c=0.75, **kwargs)
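
Example: a minimal usage sketch (assumptions: a working VapourSynth environment, and that AdobeBicubic is re-exported from the vskernels package root; the BlankClip is only a stand-in source):

import vapoursynth as vs
from vskernels import AdobeBicubic  # assumed top-level re-export

core = vs.core

# Stand-in source clip; replace with a real source filter in practice.
clip = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720, length=24)

# Upscale to 1080p with Adobe's "Bicubic" preset (b=0, c=0.75).
upscaled = AdobeBicubic().scale(clip, 1920, 1080)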

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The bobbed clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed
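
Example: a hedged sketch of single- vs double-rate deinterlacing (interlaced stands for an assumed field-based source clip; AdobeBicubic is imported as in the earlier sketch, and tff=True is purely illustrative):

# Double-rate output: one frame per field, i.e. twice the source frame rate.
bobbed = AdobeBicubic().deinterlace(interlaced, tff=True, double_rate=True)

# Single-rate output: same frame rate as the source (every other bobbed frame is dropped).
progressive = AdobeBicubic().deinterlace(interlaced, tff=True, double_rate=False)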

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image borders handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0 (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during scaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)
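
Example: a hedged descale call (assumes the descale plugin is available and that src is a 1080p clip believed to have been upscaled from 1280x720 with this kernel; the 0.5 vertical shift is purely illustrative):

kernel = AdobeBicubic()

# Descale to the presumed native resolution.
native = kernel.descale(src, 1280, 720)

# The same call with an explicit (top, left) subpixel shift.
native_shifted = kernel.descale(src, 1280, 720, shift=(0.5, 0))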

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
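
Example: a sketch of how from_param/ensure_obj let a function accept a scaler by name, class, or instance (the rescale_to_1080 helper is hypothetical, and it is assumed that string names resolve to the scaler class of the same name):

from vskernels import Bicubic  # assumed top-level re-export

def rescale_to_1080(clip, scaler="AdobeBicubic"):
    # Normalize whatever the caller passed (name, class, or instance) into an instance.
    kernel = Bicubic.ensure_obj(scaler, rescale_to_1080)
    return kernel.scale(clip, 1920, 1080)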

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )
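
Example: a hedged resample call converting YUV to 32-bit float RGB (yuv_clip is assumed, vs is imported as in the earlier sketch, and the integer 1 is used as a MatrixLike value standing for BT.709):

# matrix_in tells the resampler how to interpret the source YUV colors.
rgb = AdobeBicubic().resample(yuv_clip, vs.RGBS, matrix_in=1)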

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image borders handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0 (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image borders handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)
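
Example: a hedged sketch of the usual descale/rescale round trip used to judge how well a kernel matches the original upscale (src is an assumed 1080p source, as above):

kernel = AdobeBicubic()

# Descale to the presumed native resolution, then rescale back to the source size
# with the same kernel so the result can be compared against src.
descaled = kernel.descale(src, 1280, 720)
rescaled = kernel.rescale(descaled, src.width, src.height)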

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0 (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_h)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but lists of shifts have been passed.

Source code in vskernels/abstract/base.py
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but lists of shifts have been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)
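
Example: a hedged sketch of the shift overloads (values are illustrative; clip is the YUV420P16 stand-in from the earlier sketch, so the per-plane form has three planes to work with):

# Uniform (top, left) subpixel shift applied to every plane.
shifted = AdobeBicubic().shift(clip, (0.5, 0.25))

# Per-plane vertical shifts (luma untouched, both chroma planes shifted), zero horizontal shift.
shifted_chroma = AdobeBicubic().shift(clip, [0, 0.25, 0.25], 0)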

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
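
Example: a hedged supersampling call (clip is the 1280x720 stand-in from the earlier sketch; dimensions are rounded up with ceil, as in the source above):

# 2x supersample: 1280x720 becomes 2560x1440.
doubled = AdobeBicubic().supersample(clip, 2.0)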

AdobeBicubicSharper

AdobeBicubicSharper(**kwargs: Any)

Bases: Bicubic

Adobe's "Bicubic Sharper" interpolation preset resizer (b=0, c=1, blur=1.05).

Initialize the scaler with optional arguments.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image borders handling and sampling grid alignment, optionally using linear light processing.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip, with image borders handling and sampling grid alignment, optionally using linear light processing.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the scaler with optional arguments.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(b=0, c=1, blur=1.05, **kwargs)
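
Example: this preset is used exactly like AdobeBicubic above, and the built-in blur=1.05 is applied automatically (clip is the stand-in from the earlier sketch, and the same top-level re-export is assumed):

from vskernels import AdobeBicubicSharper  # assumed top-level re-export

# Downscale with Adobe's "Bicubic Sharper" preset.
downscaled = AdobeBicubicSharper().scale(clip, 960, 540)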

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py, lines 49-72
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The bobbed clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed
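
Example (a sketch under the same placeholder-clip assumption as the bob example above):

import vapoursynth as vs
from vskernels import AdobeBicubicSharper

core = vs.core

interlaced = core.std.SetFieldBased(core.std.BlankClip(format=vs.YUV420P8, width=720, height=480), 2)

# double_rate=False keeps the original frame rate by dropping every other bobbed frame.
deint = AdobeBicubicSharper().deinterlace(interlaced, double_rate=False)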

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image border handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py, lines 671-771
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during scaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)
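
Example (a sketch of a plain progressive descale; it assumes the descale plugin is installed, in a build that accepts the blur argument since this preset carries blur=1.05, and uses a GRAYS blank clip as a stand-in luma plane):

import vapoursynth as vs
from vskernels import AdobeBicubicSharper

core = vs.core

luma = core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)

# Reverse a hypothetical 1280x720 -> 1080p upscale done with this preset.
descaled = AdobeBicubicSharper().descale(luma, 1280, 720)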

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py, lines 373-390
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
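
Example (a sketch based on the signature above; all three calls are expected to yield an AdobeBicubicSharper instance):

from vskernels import AdobeBicubicSharper

a = AdobeBicubicSharper.ensure_obj()                       # None: falls back to this class
b = AdobeBicubicSharper.ensure_obj(AdobeBicubicSharper)    # a scaler type
c = AdobeBicubicSharper.ensure_obj(AdobeBicubicSharper())  # already an instance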

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py, lines 354-371
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
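
Example (a sketch; unlike ensure_obj, from_param resolves to a type rather than an instance, and the string lookup by class name is assumed to be supported):

from vskernels import AdobeBicubicSharper

kernel_cls = AdobeBicubicSharper.from_param("AdobeBicubicSharper")  # by name (assumed resolvable)
default_cls = AdobeBicubicSharper.from_param()  # None is expected to fall back to this class

kernel = kernel_cls()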

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py, lines 60-68
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py, lines 946-967
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py, lines 52-58
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py, lines 992-1021
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)
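
Example (a sketch; matrix values are normalized through Matrix.from_param, so plain integers or None are accepted):

import vapoursynth as vs
from vskernels import AdobeBicubicSharper

core = vs.core

src = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

args = AdobeBicubicSharper().get_resample_args(src, vs.RGBS, None, 1)
# Expected keys include format (the numeric id of RGBS), matrix, matrix_in,
# plus filter_param_a/filter_param_b carrying b and c.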

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py, lines 969-990
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py, lines 923-944
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py, lines 431-442
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py, lines 70-74
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2
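
Example (a sketch; although listed under methods, kernel_radius is exposed as a cached property in the source above, so it is read as an attribute):

from vskernels import AdobeBicubicSharper

radius = AdobeBicubicSharper().kernel_radius
print(radius)  # 2 for this preset, since (b, c) = (0, 1) != (0, 0)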

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py, lines 531-551
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py, lines 421-429
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py, lines 717-748
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )
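
Example (a sketch of a YUV-to-RGB conversion; BT.709 as the source matrix is an assumption about the placeholder clip):

import vapoursynth as vs
from vskernels import AdobeBicubicSharper

core = vs.core

src = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

# Convert to 32-bit float RGB, telling the resampler which matrix the source uses.
rgb = AdobeBicubicSharper().resample(src, vs.RGBS, matrix_in=vs.MATRIX_BT709)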

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image border handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py, lines 773-843
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image borders handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)
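
Example (a sketch pairing descale with rescale on a stand-in luma plane; it assumes the descale plugin is installed):

import vapoursynth as vs
from vskernels import AdobeBicubicSharper

core = vs.core

luma = core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)
kernel = AdobeBicubicSharper()

descaled = kernel.descale(luma, 1280, 720)
# Rebuild the source resolution from the descaled clip with matching parameters.
rescaled = kernel.rescale(descaled, luma.width, luma.height)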

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py, lines 527-660
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_h)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
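
Example (a sketch of the two main call patterns described above; the values are illustrative only):

import vapoursynth as vs
from vskernels import AdobeBicubicSharper

core = vs.core

src = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)
kernel = AdobeBicubicSharper()

# Upscale in linear light.
up = kernel.scale(src, 3840, 2160, linear=True)

# Per-plane shift: one list entry per plane of the YUV clip, passed as (tops, lefts).
shifted = kernel.scale(src, 1920, 1080, ([0.0, 0.25, 0.25], [0.0, 0.25, 0.25]))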

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but lists of shifts have been passed.

Source code in vskernels/abstract/base.py, lines 845-903
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but lists of shift has been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)
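
Example (a sketch of the documented overloads: a uniform (top, left) pair, or per-plane lists):

import vapoursynth as vs
from vskernels import AdobeBicubicSharper

core = vs.core

src = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)
kernel = AdobeBicubicSharper()

uniform = kernel.shift(src, (0.5, 0.25))               # same shift for every plane
per_plane = kernel.shift(src, [0.5, 0.0, 0.0], 0.25)   # per-plane top shifts, shared left shift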

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py, lines 496-529
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
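
Example (a sketch; rfactor may be fractional, and the target dimensions are rounded up with ceil as in the source above):

import vapoursynth as vs
from vskernels import AdobeBicubicSharper

core = vs.core

src = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

ss = AdobeBicubicSharper().supersample(src, 1.5)
print(ss.width, ss.height)  # 2880 1620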

AdobeBicubicSmoother

AdobeBicubicSmoother(**kwargs: Any)

Bases: Bicubic

Adobe's "Bicubic Smoother" interpolation preset resizer (b=0, c=0.625, blur=1.15).

Initialize the scaler with optional arguments.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image border handling and sampling grid alignment.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py, lines 190-197
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the scaler with optional arguments.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(b=0, c=5 / 8, blur=1.15, **kwargs)
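
Example (a minimal sketch mirroring the one for AdobeBicubicSharper; a blank clip stands in for a real source):

import vapoursynth as vs
from vskernels import AdobeBicubicSmoother

core = vs.core

src = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

# Upscale with Adobe's "Bicubic Smoother" preset (b=0, c=0.625, blur=1.15).
up = AdobeBicubicSmoother().scale(src, 1920, 1080)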

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py, lines 26-47
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py, lines 49-72
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The bobbed clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image border handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py, lines 671-771
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during scaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)
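
Example usage, as a minimal sketch: the interface is shared by every kernel in this module, so Catrom is used below purely as an arbitrary concrete choice, and src is assumed to be an already-loaded 1080p progressive clip whose presumed native resolution is 1280x720.

from vskernels import Catrom

kernel = Catrom()

# Plain progressive descale to the presumed native resolution.
descaled = kernel.descale(src, 1280, 720)

# The same call with a small vertical subpixel shift; border_handling and
# sample_grid_model keep their documented defaults (MIRROR and MATCH_EDGES).
descaled_shifted = kernel.descale(src, 1280, 720, shift=(0.25, 0))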

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
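
A short sketch of the usual call shapes (Catrom is an arbitrary example; a string name or another scaler class is resolved the same way, and the fallback behavior for None is assumed here from the default value in the signature):

from vskernels import Catrom

k1 = Catrom.ensure_obj()          # None: fall back to this class with default arguments
k2 = Catrom.ensure_obj(Catrom)    # a class: it gets instantiated
k3 = Catrom.ensure_obj(Catrom())  # an instance: passed through as-is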

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
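
Compared to ensure_obj, this resolves to the class itself rather than an instance; a minimal sketch (Catrom as an arbitrary example):

from vskernels import Catrom

kernel_cls = Catrom.from_param(Catrom)  # a type is returned as-is
kernel = kernel_cls()                   # instantiate it yourself when needed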

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)
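
Since this alias simply forwards to supersample, migrating is a one-line change (src is an assumed pre-loaded clip, Catrom an arbitrary kernel):

from vskernels import Catrom

doubled_old = Catrom().multi(src, 2.0)        # deprecated; emits a DeprecationWarning
doubled_new = Catrom().supersample(src, 2.0)  # preferred; identical result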

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )
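
A usage sketch (assumes src is an already-loaded RGB clip; matrix=1 selects BT.709 coefficients):

import vapoursynth as vs

from vskernels import Catrom

# Convert an RGB source to 16-bit YUV 4:2:0, tagging BT.709 as the target matrix.
yuv = Catrom().resample(src, vs.YUV420P16, matrix=1)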

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image border handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center has to be in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image borders handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)
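
A sketch of a typical descale/rescale round trip, used to check how well a kernel reconstructs the source (Catrom arbitrary, src an assumed pre-loaded 1080p clip):

from vskernels import Catrom

kernel = Catrom()

descaled = kernel.descale(src, 1280, 720)

# Bring the descaled clip back to the source resolution with the same kernel,
# so the result can be compared against src.
rescaled = kernel.rescale(descaled, src.width, src.height)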

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center has to be in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_h)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
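
A usage sketch; the list form of shift is what triggers the per-plane path implemented above (src is an assumed pre-loaded YUV 4:2:0 clip, Catrom an arbitrary kernel):

from vskernels import Catrom

kernel = Catrom()

# Uniform shift: one (top, left) pair applied to every plane.
scaled = kernel.scale(src, 1920, 1080, shift=(0.5, 0.25))

# Per-plane shift: one top value and one left value per plane (Y, U, V).
scaled_pp = kernel.scale(src, 1920, 1080, shift=([0.5, 0.25, 0.25], [0.0, 0.0, 0.0]))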

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but a list of shifts has been passed.

Source code in vskernels/abstract/base.py
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but lists of shift has been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)
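
A sketch of the accepted call shapes, mirroring the overloads listed above (src is an assumed pre-loaded YUV clip, Catrom an arbitrary kernel):

from vskernels import Catrom

kernel = Catrom()

shifted_a = kernel.shift(src, (0.5, 0.25))             # a single (top, left) tuple
shifted_b = kernel.shift(src, 0.5, 0.25)               # top and left as separate floats
shifted_c = kernel.shift(src, [0.5, 0.25, 0.25], 0.0)  # per-plane vertical shifts, shared left shift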

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
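
A usage sketch (src assumed pre-loaded); the destination size is ceil(width * rfactor) by ceil(height * rfactor), as computed above:

from vskernels import Catrom

ss_2x = Catrom().supersample(src, 2.0)
ss_15 = Catrom().supersample(src, 1.5)  # fractional factors are fine as long as the result stays positive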

BSpline

BSpline(**kwargs: Any)

Bases: Bicubic

BSpline resizer (b=1, c=0).

Initialize the scaler with optional arguments.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image border handling and sampling grid alignment,

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip,

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the scaler with optional arguments.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(b=1, c=0, **kwargs)
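
Since BSpline merely pins b=1, c=0, it is used exactly like any other kernel on this page; a quick sketch (src assumed pre-loaded):

from vskernels import BSpline

# BSpline is very soft, which makes it handy as a deliberately blurry reference.
soft = BSpline().scale(src, 1280, 720)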

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))
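
A usage sketch (assumes src is an interlaced clip; tff only needs to be passed when the field order is not already tagged on the clip):

from vskernels import BSpline

bobbed = BSpline().bob(src)                # field order taken from the clip's properties
bobbed_tff = BSpline().bob(src, tff=True)  # force top-field-first on an untagged source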

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The bobbed clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed
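
A usage sketch showing the double_rate switch (src assumed interlaced):

from vskernels import BSpline

single_rate = BSpline().deinterlace(src, tff=True, double_rate=False)  # keep the original frame rate
double_rate_clip = BSpline().deinterlace(src, tff=True)                # double-rate output, same frames as bob()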

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image border handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center has to be in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during scaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
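
Unlike ensure_obj, from_param returns the resolved class rather than an instance, leaving construction to the caller. A minimal sketch (Mitchell is just an illustrative choice):

from vskernels import Bicubic, Mitchell

kernel_t = Bicubic.from_param(Mitchell)  # the class itself, not an instance
kernel = kernel_t()                      # construct it yourself, optionally with kwargs

assert kernel_t is Mitchell and isinstance(kernel, Mitchell)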

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2
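
Only the b = c = 0 case (pure Hermite) has a support radius of 1; any other bicubic variant extends two samples from the center. A quick check, as a sketch:

from vskernels import Bicubic, Catrom, Hermite

assert Hermite().kernel_radius == 1          # b = c = 0
assert Catrom().kernel_radius == 2           # b = 0, c = 0.5
assert Bicubic(b=1, c=0).kernel_radius == 2  # any non-zero b or c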

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )
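
A minimal sketch converting an RGB clip to subsampled YUV; the integer matrix code 1 (BT.709) and the BlankClip stand-in are illustrative assumptions, and any MatrixLike value works in their place:

import vapoursynth as vs
from vskernels import Bicubic

rgb = vs.core.std.BlankClip(width=1920, height=1080, format=vs.RGBS)

# Only the format and matrix change; the dimensions stay as they are.
yuv = Bicubic(b=0, c=0.5).resample(rgb, vs.YUV420P16, matrix=1)
assert yuv.format.id == vs.YUV420P16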

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image borders handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image borders handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)
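
rescale pairs with descale: it re-applies the same kernel to a previously descaled clip so the reconstruction can be compared against the source. A minimal round-trip sketch, assuming the descale plugin is available and treating the GRAYS stand-in and the 1280x720 native resolution as placeholders:

import vapoursynth as vs
from vskernels import Catrom

core = vs.core
kernel = Catrom()

src = core.std.BlankClip(width=1920, height=1080, format=vs.GRAYS)

descaled = kernel.descale(src, 1280, 720)        # assumed native resolution
rescaled = kernel.rescale(descaled, 1920, 1080)  # reconstruct the upscale, e.g. for error masks

assert (rescaled.width, rescaled.height) == (src.width, src.height)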

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_h)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
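
Per-plane handling only activates when a list is passed for either shift component; a plain tuple keeps the uniform path. A minimal sketch of both call styles (the BlankClip stand-in and the shift values are arbitrary):

import vapoursynth as vs
from vskernels import Bicubic

clip = vs.core.std.BlankClip(width=1280, height=720, format=vs.YUV420P8)
kernel = Bicubic(b=0, c=0.5)

uniform = kernel.scale(clip, 1920, 1080)                               # no shift, uniform path
per_plane = kernel.scale(clip, 1920, 1080, ([0.0, 0.25, 0.25], 0.0))  # per-plane top shift, shared left shift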

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but a list of shifts has been passed.

Source code in vskernels/abstract/base.py
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but lists of shift has been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)
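
A minimal sketch of the three call conventions accepted by the overloads above (the shift values are arbitrary):

import vapoursynth as vs
from vskernels import Catrom

clip = vs.core.std.BlankClip(width=1280, height=720, format=vs.YUV420P8)
kernel = Catrom()

a = kernel.shift(clip, (0.5, 0.25))           # a single (top, left) tuple, applied uniformly
b = kernel.shift(clip, 0.5, 0.25)             # separate top and left values
c = kernel.shift(clip, [0.5, 0.0, 0.0], 0.0)  # per-plane vertical shifts, shared horizontal shift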

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
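
A minimal sketch; both dimensions are multiplied by rfactor and rounded up with ceil, so fractional factors are accepted as well:

import vapoursynth as vs
from vskernels import Bicubic

clip = vs.core.std.BlankClip(width=1280, height=720, format=vs.YUV420P16)
kernel = Bicubic(b=0, c=0.5)

doubled = kernel.supersample(clip, 2)    # 2560x1440
halved = kernel.supersample(clip, 0.5)   # 640x360

assert (doubled.width, doubled.height) == (2560, 1440)
assert (halved.width, halved.height) == (640, 360)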

Bicubic

Bicubic(b: float = 0, c: float = 0.5, **kwargs: Any)

Bases: ZimgComplexKernel

Bicubic resizer.

Initialize the scaler with specific 'b' and 'c' parameters and optional arguments.

Parameters:

  • b

    (float, default: 0 ) –

    The 'b' parameter for bicubic interpolation.

  • c

    (float, default: 0.5 ) –

    The 'c' parameter for bicubic interpolation.

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip,

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py
def __init__(self, b: float = 0, c: float = 0.5, **kwargs: Any) -> None:
    """
    Initialize the scaler with specific 'b' and 'c' parameters and optional arguments.

    Args:
        b: The 'b' parameter for bicubic interpolation.
        c: The 'c' parameter for bicubic interpolation.
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    self.b = b
    self.c = c
    super().__init__(**kwargs)
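
Every preset in this module is Bicubic with fixed coefficients, so a hand-rolled instance using Mitchell-Netravali's classic (1/3, 1/3) values matches the Mitchell preset. A quick sketch:

from vskernels import Bicubic, Mitchell

custom = Bicubic(b=1 / 3, c=1 / 3)
preset = Mitchell()

assert (custom.b, custom.c) == (preset.b, preset.c) == (1 / 3, 1 / 3)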

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))
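
A minimal sketch on an assumed top-field-first, NTSC-like stand-in source; bobbing doubles both the frame count and the frame rate:

import vapoursynth as vs
from vskernels import Catrom

interlaced = vs.core.std.BlankClip(width=720, height=480, format=vs.YUV420P8, fpsnum=30000, fpsden=1001)

bobbed = Catrom().bob(interlaced, tff=True)  # explicit field order instead of relying on frame props
assert bobbed.num_frames == interlaced.num_frames * 2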

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The bobbed clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed
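
A minimal sketch of the single-rate path, which bobs and then keeps every second frame (same stand-in assumptions as the bob example above):

import vapoursynth as vs
from vskernels import Catrom

interlaced = vs.core.std.BlankClip(width=720, height=480, format=vs.YUV420P8, fpsnum=30000, fpsden=1001)

single_rate = Catrom().deinterlace(interlaced, tff=True, double_rate=False)
assert single_rate.num_frames == interlaced.num_frames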

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image borders handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during scaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2
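
As the property above shows, the radius only collapses to 1 for the b = c = 0 cubic; every other parameter pair uses the full two-pixel support. A minimal sketch, assuming Bicubic is importable from vskernels:

from vskernels import Bicubic

Bicubic(b=0, c=0).kernel_radius    # 1: the b=c=0 cubic needs only a one-pixel support
Bicubic(b=0, c=0.5).kernel_radius  # 2: any other b/c pair uses the full radius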

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )
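
A minimal usage sketch for resample, assuming a VapourSynth environment and the usual top-level vskernels exports. The matrix arguments are only needed when the color family changes; the values shown are illustrative.

import vapoursynth as vs
from vskernels import Bicubic

core = vs.core
src = core.std.BlankClip(format=vs.YUV420P8, width=1920, height=1080)

kernel = Bicubic(b=0, c=0.5)

# Same color family: no matrix information required.
yuv444 = kernel.resample(src, vs.YUV444P16)

# YUV -> RGB: pass the source matrix so the conversion is well defined (1 = BT.709).
rgb = kernel.resample(src, vs.RGBS, matrix_in=1)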

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image border handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image borders handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)
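
A sketch of the intended descale/rescale round trip, assuming a progressive source, the descale plugin, and the usual top-level vskernels exports; the 720p native resolution is an assumption for the example, not a recommendation.

import vapoursynth as vs
from vskernels import Bicubic

core = vs.core
src = core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)

kernel = Bicubic(b=1 / 3, c=1 / 3)

descaled = kernel.descale(src, 1280, 720)
# rescale() reuses the same kernel and grid parameters to go back to the source size.
rescaled = kernel.rescale(descaled, src.width, src.height)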

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_h)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
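
The per-plane handling above is easiest to see with a small sketch. It assumes a VapourSynth environment and that Bicubic is importable from vskernels; the shift values are arbitrary.

import vapoursynth as vs
from vskernels import Bicubic

core = vs.core
src = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

kernel = Bicubic(b=0, c=0.5)

# Scalar shifts: the whole clip goes through the fast single-pass path.
up = kernel.scale(src, 1920, 1080, shift=(0.25, 0.0))

# Per-plane shifts: one value per plane (luma first); the chroma planes
# additionally receive the chroma-location offsets computed above.
up_pp = kernel.scale(src, 1920, 1080, shift=([0.0, 0.25, 0.25], [0.0, 0.125, 0.125]))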

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but lists of shifts have been passed.

Source code in vskernels/abstract/base.py
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but lists of shift has been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)
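
The three call shapes accepted by shift, in one sketch, assuming the usual VapourSynth and vskernels imports:

import vapoursynth as vs
from vskernels import Bicubic

core = vs.core
src = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

kernel = Bicubic(b=0, c=0.5)

shifted_pair = kernel.shift(src, (0.5, 0.25))                           # one (top, left) tuple
shifted_split = kernel.shift(src, 0.5, 0.25)                            # separate top and left values
shifted_planes = kernel.shift(src, [0.0, 0.5, 0.5], [0.0, 0.25, 0.25])  # per-plane lists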

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
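
A short sketch of supersample, assuming Bicubic is importable from vskernels; non-integer factors are rounded up with ceil() as in the implementation above.

import vapoursynth as vs
from vskernels import Bicubic

core = vs.core
src = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

ss = Bicubic(b=0, c=0.5).supersample(src, 1.5)
assert (ss.width, ss.height) == (1920, 1080)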

BicubicAuto

BicubicAuto(b: float = ..., c: None = None, **kwargs: Any)
BicubicAuto(b: None = None, c: float = ..., **kwargs: Any)
BicubicAuto(b: float | None = None, c: float | None = None, **kwargs: Any)

Bases: Bicubic

Bicubic resizer that follows the rule of b + 2c = ...

Initialize the scaler with optional arguments.

Parameters:

  • b

    (float | None, default: None ) –

    The 'b' parameter for bicubic interpolation.

  • c

    (float | None, default: None ) –

    The 'c' parameter for bicubic interpolation.

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Raises:

  • CustomValueError

    If both 'b' and 'c' are specified.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip,

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py
def __init__(self, b: float | None = None, c: float | None = None, **kwargs: Any) -> None:
    """
    Initialize the scaler with optional arguments.

    Args:
        b: The 'b' parameter for bicubic interpolation.
        c: The 'c' parameter for bicubic interpolation.
        **kwargs: Keyword arguments that configure the internal scaling behavior.

    Raises:
        CustomValueError: If both 'b' and 'c' are specified
    """
    if None not in {b, c}:
        raise CustomValueError("You can't specify both b and c!", self.__class__)

    super().__init__(*self._get_bc_args(b, c), **kwargs)
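
A sketch of the constructor contract described above: pass either b or c and the other value is derived from the class's b/2c relationship, while passing both raises CustomValueError. It assumes BicubicAuto is importable from the top-level vskernels package.

from vskernels import BicubicAuto  # assumed top-level export

sharper = BicubicAuto(c=0.75)  # b is derived from c
softer = BicubicAuto(b=0.6)    # c is derived from b

try:
    BicubicAuto(b=0.2, c=0.4)  # both given: rejected
except Exception:              # CustomValueError per the docstring above
    pass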

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))
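
A minimal bob sketch, assuming an interlaced source clip and the usual imports; tff is passed explicitly here so the example does not rely on frame props.

import vapoursynth as vs
from vskernels import Bicubic

core = vs.core
interlaced = core.std.BlankClip(format=vs.YUV420P8, width=720, height=480, fpsnum=30000, fpsden=1001)

bobbed = Bicubic(b=0, c=0.5).bob(interlaced, tff=True)
# One output frame per field, i.e. double the input frame rate.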

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The bobbed clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed
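
As the code above shows, deinterlace is bob() plus an optional decimation back to the original frame rate. A self-contained sketch under the same assumptions as the bob example:

import vapoursynth as vs
from vskernels import Bicubic

core = vs.core
interlaced = core.std.BlankClip(format=vs.YUV420P8, width=720, height=480, fpsnum=30000, fpsden=1001)

kernel = Bicubic(b=0, c=0.5)

double_rate = kernel.deinterlace(interlaced, tff=True)
single_rate = kernel.deinterlace(interlaced, tff=True, double_rate=False)  # drops every other bobbed frame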

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image border handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during scaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)
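
A minimal descale sketch for the progressive path, assuming the descale plugin is installed and that the source's native resolution is known; the 810p target here is purely illustrative.

import vapoursynth as vs
from vskernels import Bicubic

core = vs.core
src = core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)

descaled = Bicubic(b=0, c=0.5).descale(src, 1440, 810, linear=True)
# Internally the clip is processed in 32-bit float and returned at its original depth.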

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
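
A sketch of how ensure_obj normalizes its input, assuming the usual top-level exports; string resolution depends on the registered kernel names and is an assumption here.

from vskernels import Bicubic

k_default = Bicubic.ensure_obj()                # no input: an instance of the class itself
k_same = Bicubic.ensure_obj(Bicubic(b=0, c=1))  # an existing instance is passed through
k_named = Bicubic.ensure_obj("bicubic")         # resolved from a registered name (assumed)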

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
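
Unlike ensure_obj, from_param returns a scaler type rather than an instance, so it still needs to be instantiated before use. A sketch, with the name-based lookup hedged as an assumption:

from vskernels import Bicubic

kernel_t = Bicubic.from_param()  # defaults to the class it is called on
kernel = kernel_t(b=0, c=0.5)

# A registered name can also be resolved, e.g. Bicubic.from_param("bicubic")
# (string resolution is assumed for this sketch).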

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py, lines 531-551
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py, lines 421-429
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py, lines 717-748
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )
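
A minimal resample sketch; the Catrom import path and the integer matrix constant (1 = BT.709) are assumptions:

import vapoursynth as vs
from vskernels import Catrom  # assumed import path

core = vs.core
rgb = core.std.BlankClip(format=vs.RGBS, width=1920, height=1080)

# Convert float RGB to 16-bit YUV 4:2:0 and tag the result with BT.709 coefficients.
yuv = Catrom().resample(rgb, vs.YUV420P16, matrix=1)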

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image borders handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py, lines 773-843
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image borders handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)
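
A hedged sketch of the intended workflow: descale to a presumed native resolution with the same kernel, process, then rescale back so border handling and the sampling grid stay consistent. The kernel choice and import path are assumptions:

import vapoursynth as vs
from vskernels import BicubicSharp  # assumed import path

core = vs.core
luma = core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)

kernel = BicubicSharp()
descaled = kernel.descale(luma, 1280, 720)        # presumed native resolution
# ... filter the descaled clip here ...
restored = kernel.rescale(descaled, 1920, 1080)   # reverses the descale step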

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py, lines 527-660
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_h)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
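
A usage sketch (Catrom and the import path are assumptions): a plain upscale, and the same upscale performed in linear light through a sigmoid curve with a small uniform subpixel shift.

import vapoursynth as vs
from vskernels import Catrom  # assumed import path

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

kernel = Catrom()
up = kernel.scale(clip, 1920, 1080)
up_linear = kernel.scale(clip, 1920, 1080, (0.25, 0.0), linear=True, sigmoid=(6.5, 0.75))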

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but lists of shifts have been passed.

Source code in vskernels/abstract/base.py, lines 845-903
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but lists of shift has been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)
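
A sketch of the two calling styles (kernel and import path assumed): a uniform (top, left) shift, and per-plane vertical shifts that leave luma untouched.

import vapoursynth as vs
from vskernels import Catrom  # assumed import path

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P8, width=1280, height=720)

kernel = Catrom()
uniform = kernel.shift(clip, (0.5, 0.25))                # same shift for every plane
per_plane = kernel.shift(clip, [0.0, 0.25, 0.25], 0.0)   # luma unshifted, chroma moved down 0.25 px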

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py, lines 496-529
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
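
A minimal sketch (kernel and import path assumed); the target dimensions are ceil(width * rfactor) and ceil(height * rfactor):

import vapoursynth as vs
from vskernels import Catrom  # assumed import path

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P16, width=960, height=540)

doubled = Catrom().supersample(clip, 2.0)
assert (doubled.width, doubled.height) == (1920, 1080)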

BicubicSharp

BicubicSharp(**kwargs: Any)

Bases: Bicubic

BicubicSharp resizer (b=0, c=1).

Initialize the scaler with optional arguments.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip,

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py, lines 205-212
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the scaler with optional arguments.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(b=0, c=1, **kwargs)
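
Because the preset only pins the spline coefficients, constructing BicubicSharp is equivalent to passing them to Bicubic directly (import path assumed):

from vskernels import Bicubic, BicubicSharp  # assumed import path

sharp = BicubicSharp()
explicit = Bicubic(b=0, c=1)
assert (sharp.b, sharp.c) == (explicit.b, explicit.c) == (0, 1)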

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py, lines 26-47
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))
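
A bob sketch (import path assumed); the field order is read from the clip's properties when tff is left as None:

import vapoursynth as vs
from vskernels import BicubicSharp  # assumed import path

core = vs.core
interlaced = core.std.BlankClip(format=vs.YUV420P8, width=720, height=480, fpsnum=30000, fpsden=1001)
interlaced = core.std.SetFieldBased(interlaced, 2)  # mark the clip as top-field-first

bobbed = BicubicSharp().bob(interlaced)  # double-rate output, one frame per field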

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py, lines 49-72
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The bobbed clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed
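
A companion sketch to the bob example above (same assumptions); double_rate only decides whether every field or every other field becomes an output frame:

import vapoursynth as vs
from vskernels import BicubicSharp  # assumed import path

core = vs.core
interlaced = core.std.BlankClip(format=vs.YUV420P8, width=720, height=480, fpsnum=30000, fpsden=1001)
interlaced = core.std.SetFieldBased(interlaced, 2)  # top-field-first

kernel = BicubicSharp()
double_rate = kernel.deinterlace(interlaced)                     # ~59.94 fps
single_rate = kernel.deinterlace(interlaced, double_rate=False)  # ~29.97 fps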

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image borders handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py, lines 671-771
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during scaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)
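
A minimal descale sketch (import path assumed, and the 720p target is a guess standing in for a known native resolution):

import vapoursynth as vs
from vskernels import BicubicSharp  # assumed import path

core = vs.core
luma = core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)

kernel = BicubicSharp()
native = kernel.descale(luma, 1280, 720)                  # presumed native resolution
shifted = kernel.descale(luma, 1280, 720, (0.125, 0.0))   # same, with a fractional source shift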

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py, lines 373-390
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
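
A sketch of the normalization this helper performs (import path assumed): classes, instances and None all come back as a ready-to-use instance.

from vskernels import BicubicSharp  # assumed import path

for candidate in (BicubicSharp, BicubicSharp(), None):
    kernel = BicubicSharp.ensure_obj(candidate)
    assert isinstance(kernel, BicubicSharp)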

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py, lines 354-371
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
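
A sketch of type resolution (import path assumed; resolving by name string is taken to match the class name as listed in this reference):

from vskernels import BicubicSharp  # assumed import path

kernel_t = BicubicSharp.from_param("BicubicSharp")  # name string resolves to the type itself
assert kernel_t is BicubicSharp

kernel = kernel_t()  # from_param returns the type, so instantiate it when an object is needed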

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py, lines 60-68
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py, lines 946-967
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py, lines 52-58
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}
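
A sketch of the naming split this override performs (import path assumed): descale paths receive the canonical b/c names, while the zimg resize paths receive filter_param_a/filter_param_b.

import vapoursynth as vs
from vskernels import BicubicSharp  # assumed import path

core = vs.core
clip = core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)

kernel = BicubicSharp()
descale_args = kernel.get_params_args(True, clip, 1280, 720)    # includes {'b': 0, 'c': 1}
scale_args = kernel.get_params_args(False, clip, 1920, 1080)    # includes {'filter_param_a': 0, 'filter_param_b': 1}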

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py, lines 992-1021
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py, lines 969-990
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py, lines 923-944
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py, lines 431-442
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py, lines 70-74
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py, lines 531-551
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py, lines 421-429
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py, lines 717-748
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image borders handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center must be in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image borders handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)
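
As a hedged sketch of the intended descale/rescale round trip (assuming a 1080p source whose luma was originally upscaled from 720p; the blank clip below is only a stand-in):

import vapoursynth as vs
from vskernels import Catrom

core = vs.core

src = core.std.BlankClip(format=vs.YUV420P8, width=1920, height=1080)
luma = core.std.ShufflePlanes(src, 0, vs.GRAY)

kernel = Catrom()

# Reverse the presumed 720p -> 1080p upscale...
descaled = kernel.descale(luma, 1280, 720)
# ...then rescale the descaled clip back to the source resolution with the
# same kernel so it can be compared against the original luma.
rescaled = kernel.rescale(descaled, 1920, 1080)
diff = core.std.Expr([luma, rescaled], "x y - abs")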

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center must be in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_h)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
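
A brief, hedged usage sketch (Catrom stands in for the kernel; the blank clip is a placeholder source):

import vapoursynth as vs
from vskernels import Catrom

core = vs.core

clip = core.std.BlankClip(format=vs.YUV420P8, width=1920, height=1080)

# Plain upscale to UHD.
up = Catrom().scale(clip, 3840, 2160)

# Per-plane subpixel shift: the first element of the tuple is the per-plane top
# shift, the second the per-plane left shift (luma untouched, chroma nudged here).
shifted = Catrom().scale(clip, 1920, 1080, ([0.0, 0.25, 0.25], [0.0, 0.25, 0.25]))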

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but a list of shifts has been passed.

Source code in vskernels/abstract/base.py
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but lists of shift has been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)
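
A small, hedged sketch of both call styles (placeholder blank clip, Catrom as the kernel):

import vapoursynth as vs
from vskernels import Catrom

core = vs.core

clip = core.std.BlankClip(format=vs.YUV420P8, width=1920, height=1080)

# Uniform subpixel shift of (top, left) = (0.5, 0.25) applied to every plane.
shifted = Catrom().shift(clip, (0.5, 0.25))

# Per-plane shifts: luma untouched, both chroma planes shifted.
shifted_planes = Catrom().shift(clip, [0.0, 0.25, 0.25], [0.0, 0.25, 0.25])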

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
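
For example (a hedged sketch; the factor is applied to both dimensions and rounded up with ceil):

import vapoursynth as vs
from vskernels import Catrom

core = vs.core

clip = core.std.BlankClip(format=vs.YUV420P8, width=1920, height=1080)

doubled = Catrom().supersample(clip, 2)    # 1920x1080 -> 3840x2160
larger = Catrom().supersample(clip, 1.5)   # 1920x1080 -> 2880x1620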

Catrom

Catrom(**kwargs: Any)

Bases: Bicubic

Catrom resizer (b=0, c=0.5).

Initialize the scaler with optional arguments.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip,

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the scaler with optional arguments.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(b=0, c=0.5, **kwargs)
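
Since Catrom only pins the preset coefficients, the two calls below are expected to be equivalent (a hedged sketch with a placeholder blank clip):

import vapoursynth as vs
from vskernels import Bicubic, Catrom

core = vs.core

clip = core.std.BlankClip(format=vs.YUV420P8, width=1920, height=1080)

scaled_preset = Catrom().scale(clip, 1280, 720)
scaled_manual = Bicubic(b=0, c=0.5).scale(clip, 1280, 720)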

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))
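
A hedged sketch: the field order can either be passed explicitly via tff or, as here, read from the clip's field-based property:

import vapoursynth as vs
from vskernels import Catrom

core = vs.core

interlaced = core.std.BlankClip(format=vs.YUV420P8, width=720, height=480, fpsnum=30000, fpsden=1001)
interlaced = core.std.SetFieldBased(interlaced, 2)  # mark the clip as top-field-first

# Each field becomes a frame, doubling the frame rate.
bobbed = Catrom().bob(interlaced)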

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The bobbed clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed
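
A hedged sketch of single-rate usage (placeholder blank clip; any clip with a valid field order works the same way):

import vapoursynth as vs
from vskernels import Catrom

core = vs.core

interlaced = core.std.BlankClip(format=vs.YUV420P8, width=720, height=480, fpsnum=30000, fpsden=1001)

# Explicit field order; double_rate=False keeps the original frame rate
# by dropping every other bobbed frame.
progressive = Catrom().deinterlace(interlaced, tff=True, double_rate=False)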

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image border handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center must be in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during scaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)
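
A hedged sketch of a plain progressive descale with an explicit subpixel shift (luma only, since descaling is typically done on the luma plane; the blank clip is a placeholder):

import vapoursynth as vs
from vskernels import Catrom

core = vs.core

src = core.std.BlankClip(format=vs.YUV420P8, width=1920, height=1080)
luma = core.std.ShufflePlanes(src, 0, vs.GRAY)

# Reverse a presumed 720p -> 1080p Catrom upscale, compensating for a small
# vertical offset introduced by the original resize.
descaled = Catrom().descale(luma, 1280, 720, (0.125, 0.0))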

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
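
A hedged sketch of how this normalizes its input (the exact resolution rules live in the shared _base_ensure_obj helper):

from vskernels import Catrom

kernel = Catrom()

same = Catrom.ensure_obj(kernel)   # an existing instance is expected to pass through
default = Catrom.ensure_obj(None)  # None presumably falls back to a default Catrom()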

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
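
A hedged sketch; passing a type, an instance, or a name string should all resolve to the scaler type (the string-lookup behaviour is an assumption here):

from vskernels import Catrom

t_from_type = Catrom.from_param(Catrom)
t_from_inst = Catrom.from_param(Catrom())
t_from_name = Catrom.from_param("catrom")  # assumed: resolution by class name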

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)
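
For a rough idea of the result (a hedged sketch; the exact keys depend on get_params_args of the concrete kernel, a zimg bicubic here):

import vapoursynth as vs
from vskernels import Catrom

clip = vs.core.std.BlankClip(format=vs.YUV420P8, width=1920, height=1080)

args = Catrom().get_scale_args(clip, (0.25, 0.0), 1280, 720)
# Expected to contain roughly:
# {'src_top': 0.25, 'src_left': 0.0, 'width': 1280, 'height': 720,
#  'filter_param_a': 0.0, 'filter_param_b': 0.5, ...}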

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2
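
Despite the call-style signature shown above, this is a cached property; a hedged sketch:

from vskernels import Catrom, Hermite

Hermite().kernel_radius  # 1, since b == c == 0
Catrom().kernel_radius   # 2 for any other bicubic variant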

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )
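
Complementing the earlier YUV-to-RGB sketch, a hedged example of the opposite direction (placeholder blank clip):

import vapoursynth as vs
from vskernels import Catrom

core = vs.core

rgb = core.std.BlankClip(format=vs.RGBS, width=1920, height=1080)

# Float RGB down to 16-bit YUV 4:2:0; matrix tags the target matrix to convert with.
yuv = Catrom().resample(rgb, vs.YUV420P16, matrix=vs.MATRIX_BT709)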

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image border handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center must be in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image borders handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before scaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0
            (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_h)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
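
A minimal sketch of scale with a uniform and a per-plane shift (the shift values are illustrative; Catrom is used as an example kernel):

import vapoursynth as vs

from vskernels import Catrom

core = vs.core
src = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

scaler = Catrom()
# One (top, left) pair applied to every plane.
uniform = scaler.scale(src, 1920, 1080, shift=(0.25, 0.0))
# Lists are normalized to the plane count, so each plane gets its own vertical shift.
per_plane = scaler.scale(src, 1920, 1080, shift=([0.0, 0.25, 0.25], 0.0))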

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but a list of shifts has been passed.

Source code in vskernels/abstract/base.py
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but a list of shifts has been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)
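
A minimal sketch of the three shift overloads (values are illustrative; Catrom as example kernel):

import vapoursynth as vs

from vskernels import Catrom

core = vs.core
src = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

kernel = Catrom()
a = kernel.shift(src, (0.5, 0.25))            # one (top, left) tuple for all planes
b = kernel.shift(src, 0.5, 0.25)              # separate top/left positional arguments
c = kernel.shift(src, [0.0, 0.5, 0.5], 0.0)   # per-plane vertical shifts, shared horizontal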

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
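
A minimal sketch of supersample (the factor is illustrative):

import vapoursynth as vs

from vskernels import Catrom

core = vs.core
src = core.std.BlankClip(format=vs.YUV420P16, width=960, height=540)

# 2x supersampling: 960x540 -> 1920x1080; fractional factors are rounded up per dimension.
supersampled = Catrom().supersample(src, 2.0)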

FFmpegBicubic

FFmpegBicubic(**kwargs: Any)

Bases: Bicubic

FFmpeg's swscale default resizer (b=0, c=0.6).

Initialize the scaler with optional arguments.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip,

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the scaler with optional arguments.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(b=0, c=0.6, **kwargs)
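
A minimal sketch showing that this preset is simply a Bicubic with fixed coefficients (extra keyword arguments are forwarded unchanged):

from vskernels import Bicubic, FFmpegBicubic

preset = FFmpegBicubic()
assert (preset.b, preset.c) == (0, 0.6)

# Equivalent explicit construction:
explicit = Bicubic(b=0, c=0.6)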

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))
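
A minimal sketch of bob; the field order is passed explicitly here as an assumption (when tff is None it is read from the clip's field-based frame properties):

import vapoursynth as vs

from vskernels import FFmpegBicubic

core = vs.core

# Hypothetical interlaced source; replace with your own source filter.
interlaced = core.std.BlankClip(format=vs.YUV420P8, width=720, height=480)

# Double-rate progressive output using the kernel's bob_function.
bobbed = FFmpegBicubic().bob(interlaced, tff=True)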

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The bobbed clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed
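
A minimal sketch of deinterlace; with double_rate=False the bobbed clip is decimated back to single rate:

import vapoursynth as vs

from vskernels import FFmpegBicubic

core = vs.core
interlaced = core.std.BlankClip(format=vs.YUV420P8, width=720, height=480)

double_rate = FFmpegBicubic().deinterlace(interlaced, tff=True)
single_rate = FFmpegBicubic().deinterlace(interlaced, tff=True, double_rate=False)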

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image borders handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during descaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0
            (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during descaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)
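
A minimal sketch of a luma-only descale (the 1280x720 native resolution and the shift are assumptions for illustration):

import vapoursynth as vs

from vskernels import FFmpegBicubic

core = vs.core
luma = core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)

kernel = FFmpegBicubic()
descaled = kernel.descale(luma, 1280, 720)
# The same call with an explicit (top, left) subpixel shift.
descaled_shifted = kernel.descale(luma, 1280, 720, (0.25, 0.0))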

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
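
A minimal sketch of ensure_obj; strings, classes, and existing instances all resolve to an instance (the string form shown here is an assumption based on the scaler's class name):

from vskernels import FFmpegBicubic

a = FFmpegBicubic.ensure_obj()                  # default instance of this class
b = FFmpegBicubic.ensure_obj("FFmpegBicubic")   # resolved from a string identifier
c = FFmpegBicubic.ensure_obj(FFmpegBicubic())   # an existing instance is passed through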

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
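
A minimal sketch of from_param, which resolves to the scaler type rather than an instance:

from vskernels import FFmpegBicubic

scaler_t = FFmpegBicubic.from_param("FFmpegBicubic")
scaler = scaler_t()  # instantiate the resolved type yourself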

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the init method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2
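
The radii this yields for the documented presets, shown as a short sketch (Hermite has b=0, c=0, so it is the only listed preset with radius 1):

from vskernels import FFmpegBicubic, Hermite

assert Hermite().kernel_radius == 1        # b=0, c=0
assert FFmpegBicubic().kernel_radius == 2  # b=0, c=0.6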

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )
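
A minimal sketch of resample converting a subsampled clip to 4:4:4 float; the matrix arguments are omitted here, which is fine when no colorspace change is involved:

import vapoursynth as vs

from vskernels import FFmpegBicubic

core = vs.core
src = core.std.BlankClip(format=vs.YUV420P8, width=1280, height=720)

# Chroma planes are interpolated with the bicubic (b=0, c=0.6) kernel during the format change.
yuv444 = FFmpegBicubic().resample(src, vs.YUV444PS)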

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image borders handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image borders handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0
            (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before scaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0
            (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_w)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
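
For illustration, a minimal sketch of the per-plane shift handling described above (assumes VapourSynth and vskernels are installed; the clip is a stand-in created with BlankClip):

import vapoursynth as vs
from vskernels import Mitchell

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

# A single (top, left) tuple is applied uniformly to all planes.
uniform = Mitchell().scale(clip, 1920, 1080, shift=(0.25, 0.25))

# Lists are normalized to one value per plane, so luma and chroma
# can be shifted independently (values here are arbitrary).
per_plane = Mitchell().scale(clip, 1920, 1080, shift=([0.0, 0.25, 0.25], [0.0, 0.5, 0.5]))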

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but a list of shifts has been passed.

Source code in vskernels/abstract/base.py
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but a list of shifts has been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)
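
A short sketch of the three call forms listed above (illustrative values; assumes VapourSynth and vskernels are importable):

import vapoursynth as vs
from vskernels import Mitchell

clip = vs.core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)
kernel = Mitchell()

# (top, left) tuple applied to every plane.
a = kernel.shift(clip, (0.5, 0.25))

# Separate top and left values.
b = kernel.shift(clip, 0.5, 0.25)

# Per-plane vertical and horizontal shifts (one entry per plane).
c = kernel.shift(clip, [0.0, 0.25, 0.25], [0.0, 0.5, 0.5])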

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
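
For example, a quick supersampling sketch (illustrative only; assumes VapourSynth and vskernels are installed):

import vapoursynth as vs
from vskernels import Mitchell

clip = vs.core.std.BlankClip(format=vs.YUV420P16, width=960, height=540)

# rfactor=2.0 scales 960x540 up to 1920x1080; fractional results are rounded up with ceil().
supersampled = Mitchell().supersample(clip, rfactor=2.0)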

Hermite

Hermite(**kwargs: Any)

Bases: Bicubic

Hermite resizer (b=0, c=0).

Initialize the scaler with optional arguments.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip,

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the scaler with optional arguments.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(b=0, c=0, **kwargs)
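
As a sketch, Hermite is simply the bicubic preset with b=0 and c=0, so the two calls below configure the same filter (illustrative only):

import vapoursynth as vs
from vskernels import Bicubic, Hermite

clip = vs.core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

downscaled_a = Hermite().scale(clip, 1280, 720)
downscaled_b = Bicubic(b=0, c=0).scale(clip, 1280, 720)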

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))
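
A minimal bob sketch (illustrative; the field order is taken from the _FieldBased frame property, which the example sets explicitly):

import vapoursynth as vs
from vskernels import Hermite

clip = vs.core.std.BlankClip(format=vs.YUV420P16, width=720, height=480)
clip = vs.core.std.SetFieldBased(clip, 2)  # mark the clip as top-field-first

# Double-rate bob deinterlacing using the Hermite resizer.
bobbed = Hermite().bob(clip)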

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The deinterlaced clip.

Source code in vskernels/kernels/zimg/abstract.py
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The deinterlaced clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed
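
For illustration, single-rate deinterlacing keeps every other bobbed frame, as the source above shows (sketch only):

import vapoursynth as vs
from vskernels import Hermite

clip = vs.core.std.SetFieldBased(
    vs.core.std.BlankClip(format=vs.YUV420P16, width=720, height=480), 2
)

# double_rate=False keeps the original frame rate; tff is read from the clip.
deinterlaced = Hermite().deinterlace(clip, double_rate=False)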

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image borders handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0 (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during scaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)
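
A minimal descale sketch (illustrative; a GRAY clip is used since descaling is usually applied to luma, and the 1280x720 native resolution is only an assumption for the example):

import vapoursynth as vs
from vskernels import Hermite

clip = vs.core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)

# Reverse an assumed Hermite upscale back to 1280x720.
descaled = Hermite().descale(clip, 1280, 720)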

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
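
A short sketch of the two resolution helpers (illustrative; the exact set of accepted name strings is an assumption here):

from vskernels import Hermite

# With None, the class itself is used as the fallback.
kernel_cls = Hermite.from_param(None)   # -> the Hermite type
kernel = Hermite.ensure_obj(None)       # -> a Hermite instance

# A string identifier is also accepted, per the signature above.
kernel = Hermite.ensure_obj("Hermite")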

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the init method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2
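
As the property above shows, Hermite (b=0, c=0) reports a kernel radius of 1 while the other bicubic variants report 2 (illustrative sketch):

from vskernels import Hermite, Mitchell

Hermite().kernel_radius   # 1  (b == c == 0)
Mitchell().kernel_radius  # 2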

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )
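
A minimal resample sketch (illustrative; matrix=1 stands for BT.709 and is only needed because the example converts between RGB and YUV):

import vapoursynth as vs
from vskernels import Hermite

rgb = vs.core.std.BlankClip(format=vs.RGB24, width=1920, height=1080)

# Convert to 16-bit 4:2:0 YUV, using the Hermite filter for chroma resampling.
yuv = Hermite().resample(rgb, vs.YUV420P16, matrix=1)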

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image borders handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0 (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image borders handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)
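
For illustration, a descale/rescale round trip under the assumption that 1280x720 is the native resolution (sketch only):

import vapoursynth as vs
from vskernels import Hermite

src = vs.core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)
kernel = Hermite()

descaled = kernel.descale(src, 1280, 720)
rescaled = kernel.rescale(descaled, 1920, 1080)  # back to the source resolution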

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0 (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before scaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_w)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but a list of shifts has been passed.

Source code in vskernels/abstract/base.py
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but a list of shifts has been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)
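
A minimal usage sketch of the overloads above, not part of the library documentation proper: the top-level import path, the Catrom kernel choice, and the BlankClip stand-in source are assumptions made purely for illustration.

import vapoursynth as vs
from vskernels import Catrom  # assumed top-level export

core = vs.core

# Stand-in source; substitute a real clip in practice.
clip = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

kernel = Catrom()

# Uniform shift: 0.5 px down and 0.25 px right on every plane.
uniform = kernel.shift(clip, (0.5, 0.25))

# Per-plane shift: leave luma untouched, nudge both chroma planes.
per_plane = kernel.shift(clip, [0, 0.5, 0.5], [0, 0.25, 0.25])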

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If the resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
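
A minimal, hypothetical supersample sketch (the import path and the BlankClip source are assumptions); the dimension check simply mirrors the ceil() rounding shown in the listing above.

import vapoursynth as vs
from vskernels import Catrom  # assumed top-level export

core = vs.core

clip = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

# Supersample by 1.5x; output dimensions are rounded up with ceil().
ss = Catrom().supersample(clip, 1.5)
assert (ss.width, ss.height) == (2880, 1620)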

Mitchell

Mitchell(**kwargs: Any)

Bases: Bicubic

Mitchell resizer (b=1/3, c=1/3).

Initialize the scaler with optional arguments.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image border handling and sampling grid alignment,

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip,

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py
115
116
117
118
119
120
121
122
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the scaler with optional arguments.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(b=1 / 3, c=1 / 3, **kwargs)
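
The preset is therefore nothing more than a Bicubic kernel pinned to b = c = 1/3. A small sketch of the equivalence, assuming the classes are importable from the top-level vskernels package:

from vskernels import Bicubic, Mitchell  # assumed top-level exports

preset = Mitchell()
manual = Bicubic(b=1 / 3, c=1 / 3)

# Both carry the same Mitchell-Netravali coefficients.
assert (preset.b, preset.c) == (manual.b, manual.c) == (1 / 3, 1 / 3)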

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))
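
A minimal, hypothetical bob sketch; the import path and the synthetic "interlaced" source are assumptions. Since each field is rescaled to full height, the output carries twice as many frames as the input.

import vapoursynth as vs
from vskernels import Mitchell  # assumed top-level export

core = vs.core

# Synthetic stand-in for an interlaced source, flagged as top-field-first.
interlaced = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080, length=100)
interlaced = core.std.SetFieldBased(interlaced, 2)  # 2 = top field first

bobbed = Mitchell().bob(interlaced)  # field order is read from the frame props here
assert bobbed.num_frames == 2 * interlaced.num_frames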

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The deinterlaced clip.

Source code in vskernels/kernels/zimg/abstract.py
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The bobbed clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed
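
As the [::2] slice in the listing shows, double_rate=False simply keeps every other bobbed frame. A hypothetical sketch (import path and synthetic source are assumptions):

import vapoursynth as vs
from vskernels import Mitchell  # assumed top-level export

core = vs.core

interlaced = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080, length=100)
interlaced = core.std.SetFieldBased(interlaced, 2)  # 2 = top field first

kernel = Mitchell()

double = kernel.deinterlace(interlaced)                     # bobbed, 200 frames
single = kernel.deinterlace(interlaced, double_rate=False)  # every other bobbed frame, 100 frames
assert double.num_frames == 2 * single.num_frames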

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image border handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center has to be in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during scaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)
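
A minimal descale sketch under assumed imports: the GRAYS BlankClip stands in for a real 32-bit luma plane, Catrom is only an example of a kernel the source might have been upscaled with, and the descale plugin is assumed to be installed.

import vapoursynth as vs
from vskernels import Catrom  # assumed top-level export

core = vs.core

# Luma-only stand-in; in practice this would be the source's luma plane.
luma = core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)

# Descale a presumed 1280x720 Catrom upscale back to its native resolution.
descaled = Catrom().descale(luma, 1280, 720)
assert (descaled.width, descaled.height) == (1280, 720)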

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
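
These two class methods make it straightforward to write functions that accept a kernel as a string, a class, or an instance. The sketch below is a hypothetical illustration: the import path is assumed, and the exact set of names the string lookup accepts is not guaranteed here.

from typing import Any

import vapoursynth as vs
from vskernels import Bicubic  # assumed top-level export

def upscale(clip: vs.VideoNode, kernel: str | type[Bicubic] | Bicubic | None = None, **kwargs: Any) -> vs.VideoNode:
    # Resolve whatever the caller passed into a usable kernel instance,
    # falling back to the default Bicubic when nothing is given.
    scaler = Bicubic.ensure_obj(kernel, upscale)
    return scaler.scale(clip, clip.width * 2, clip.height * 2, **kwargs)

# e.g. upscale(clip, "catrom"), upscale(clip, Bicubic(b=0, c=1)) or upscale(clip)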

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
60
61
62
63
64
65
66
67
68
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
52
53
54
55
56
57
58
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}
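
This is where the kernel's b and c end up under different names depending on the target: descale-style calls receive b/c directly, while resize calls receive them as filter_param_a/filter_param_b. A quick, hypothetical inspection sketch (import path and BlankClip source are assumptions):

import vapoursynth as vs
from vskernels import Mitchell  # assumed top-level export

core = vs.core
clip = core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)

kernel = Mitchell()

# Descale path: coefficients are forwarded as b/c.
print(kernel.get_descale_args(clip, (0, 0), 1280, 720))

# Scale path: the same coefficients become filter_param_a/filter_param_b.
print(kernel.get_scale_args(clip, (0, 0), 1280, 720))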

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py
 992
 993
 994
 995
 996
 997
 998
 999
1000
1001
1002
1003
1004
1005
1006
1007
1008
1009
1010
1011
1012
1013
1014
1015
1016
1017
1018
1019
1020
1021
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py
969
970
971
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
990
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
923
924
925
926
927
928
929
930
931
932
933
934
935
936
937
938
939
940
941
942
943
944
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
431
432
433
434
435
436
437
438
439
440
441
442
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py
70
71
72
73
74
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2
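
In other words, only the b = c = 0 case (Hermite) has one-pixel support; every other bicubic variant reports a radius of 2. A tiny check, assuming the classes are importable from the top-level vskernels package:

from vskernels import Hermite, Mitchell  # assumed top-level exports

assert Hermite().kernel_radius == 1   # b = 0, c = 0
assert Mitchell().kernel_radius == 2  # b = 1/3, c = 1/3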

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py
421
422
423
424
425
426
427
428
429
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )
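
A minimal, hypothetical resample sketch; the import path, the BlankClip source and the numeric matrix value are assumptions. Converting subsampled YUV to full-chroma YUV needs no matrix, while converting to RGB requires the source matrix to be known.

import vapoursynth as vs
from vskernels import Mitchell  # assumed top-level export

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P8, width=1920, height=1080)

kernel = Mitchell()

# Same color family: only the sampling format changes.
yuv444 = kernel.resample(clip, vs.YUV444P16)

# YUV -> RGB: the source matrix must be supplied (1 = BT.709, assumed here).
rgb = kernel.resample(clip, vs.RGBS, matrix_in=1)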

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image border handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center has to be in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image borders handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)
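
rescale is the upscaling counterpart of descale, typically used to rebuild the original resolution from a descaled clip so the two can be compared. A hypothetical round-trip sketch (import path and GRAYS stand-in source are assumptions):

import vapoursynth as vs
from vskernels import Catrom  # assumed top-level export

core = vs.core
luma = core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)

kernel = Catrom()

# Descale to the presumed native resolution, then rescale back up
# with the same kernel to judge how well the kernel matches the source.
descaled = kernel.descale(luma, 1280, 720)
rebuilt = kernel.rescale(descaled, 1920, 1080)

# `rebuilt` can now be diffed against `luma`, e.g. with std.PlaneStats.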

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center has to be in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the defaults values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_h)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
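
A minimal scale sketch (import path and BlankClip source are assumptions) showing the per-plane shift path described above: a scalar (top, left) tuple goes through the single-call fast path, while list shifts split the clip and scale each plane separately.

import vapoursynth as vs
from vskernels import Catrom  # assumed top-level export

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

kernel = Catrom()

# Uniform shift: a plain (top, left) tuple, handled in a single call.
scaled = kernel.scale(clip, 1280, 720, (0.0, 0.5))

# Per-plane shift: luma untouched, chroma nudged; planes are scaled one by one.
scaled_pp = kernel.scale(clip, 1280, 720, ([0.0, 0.25, 0.25], [0.0, 0.125, 0.125]))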

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but a list of shifts has been passed.

Source code in vskernels/abstract/base.py
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but lists of shift has been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If the resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

Robidoux

Robidoux(**kwargs: Any)

Bases: Bicubic

Robidoux resizer (b=0.37822, c=0.31089).

Initialize the scaler with optional arguments.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image border handling and sampling grid alignment,

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip,

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py
237
238
239
240
241
242
243
244
245
246
247
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the scaler with optional arguments.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    b = 12 / (19 + 9 * sqrt(2))
    c = 113 / (58 + 216 * sqrt(2))

    super().__init__(b=b, c=c, **kwargs)
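
The closed-form constants evaluate to the b ≈ 0.37822, c ≈ 0.31089 quoted above and satisfy b + 2c = 1 exactly. A quick numeric check in plain Python:

from math import sqrt

b = 12 / (19 + 9 * sqrt(2))
c = 113 / (58 + 216 * sqrt(2))

print(round(b, 5), round(c, 5))    # 0.37822 0.31089
assert abs(b + 2 * c - 1) < 1e-12  # b + 2c = 1 holds exactly in closed form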

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))
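
A short usage sketch for the wrapper above; the blank clip is a hypothetical stand-in for a real interlaced source, so the field order is passed explicitly:

import vapoursynth as vs
from vskernels import Robidoux

core = vs.core

# Stand-in for an interlaced 1080i source; a real clip would come from a source filter.
clip = core.std.BlankClip(width=1920, height=1080, format=vs.YUV420P8, length=100)

bobbed = Robidoux().bob(clip, tff=True)  # field-rate (double-rate) progressive output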

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The bobbed clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed
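
As the body above shows, a same-rate result is simply the bobbed clip with every second frame dropped. A minimal sketch, using the same hypothetical stand-in source as in the bob example:

import vapoursynth as vs
from vskernels import Robidoux

core = vs.core

# Hypothetical interlaced stand-in; a real clip would come from a source filter.
clip = core.std.BlankClip(width=1920, height=1080, format=vs.YUV420P8, length=100)

deint = Robidoux().deinterlace(clip, tff=True, double_rate=False)
assert deint.num_frames == clip.num_frames  # original frame rate retained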

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image border handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Note that the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image border handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center).
            `True` applies the default values (6.5, 0.75). Note that the sigmoid slope must be in the range
            1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during scaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)
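
A minimal progressive-path sketch, assuming the luma of a 1080p source was originally produced at 1280x720 (the blank clip is a hypothetical stand-in, and descaling additionally requires the descale plugin to be available):

import vapoursynth as vs
from vskernels import Robidoux

core = vs.core
kernel = Robidoux()

# Hypothetical stand-in for a 1080p luma plane assumed to be a native-720p upscale.
luma = core.std.BlankClip(width=1920, height=1080, format=vs.GRAYS, length=24)

descaled = kernel.descale(luma, 1280, 720)      # invert the assumed upscale
rescaled = kernel.scale(descaled, 1920, 1080)   # re-upscale with the same kernel, e.g. to compare against luma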

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
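
The difference between the two class methods in practice: from_param yields a type while ensure_obj yields an instance, and passing None is assumed to fall back to the class the method is called on. A sketch:

from vskernels import Robidoux

scaler_t = Robidoux.from_param(None)   # no input: assumed to fall back to the class itself
scaler = Robidoux.ensure_obj(None)     # same, but returns a ready-to-use instance

assert scaler_t is Robidoux
assert isinstance(scaler, Robidoux)

# A string is also accepted and resolved by kernel name (assumed lookup), e.g.:
# Robidoux.from_param("Robidoux")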

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2
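
In other words, every bicubic kernel spans two pixels of support on each side except the degenerate b = c = 0 case, which only needs one. Since Robidoux uses non-zero b and c, the cached property (accessed as an attribute) reports 2:

from vskernels import Robidoux

assert Robidoux().kernel_radius == 2  # b and c are both non-zero here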

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )
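
A usage sketch: converting an RGB stand-in clip to 4:2:0 YUV, where a target matrix must be supplied because RGB carries none (the values here are illustrative):

import vapoursynth as vs
from vskernels import Robidoux

core = vs.core

rgb = core.std.BlankClip(width=1280, height=720, format=vs.RGB24, length=24)

# RGB -> YUV needs a target matrix; 1 is BT.709 in VapourSynth's numbering.
yuv = Robidoux().resample(rgb, vs.YUV420P16, matrix=1)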

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image border handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Note that the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image border handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center).
            `True` applies the default values (6.5, 0.75). Note that the sigmoid slope must be in the range
            1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Note that the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center).
            `True` applies the default values (6.5, 0.75). Note that the sigmoid slope must be in the range
            1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_h)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
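
A sketch of the per-plane path described above: passing lists applies a different subpixel shift to each plane, which the method implements by splitting the clip, scaling each plane separately, and shuffling the planes back together (clip and shift values are illustrative):

import vapoursynth as vs
from vskernels import Robidoux

core = vs.core

clip = core.std.BlankClip(width=1280, height=720, format=vs.YUV420P16, length=24)

# Uniform shift: one (top, left) pair applied to every plane.
up = Robidoux().scale(clip, 1920, 1080, shift=(0.25, 0.0))

# Per-plane shift: luma untouched, both chroma planes nudged by a quarter pixel.
up_pp = Robidoux().scale(clip, 1920, 1080, shift=([0.0, 0.25, 0.25], [0.0, 0.25, 0.25]))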

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but a list of shifts has been passed.

Source code in vskernels/abstract/base.py
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but a list of shifts has been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)
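
The three call shapes accepted by the overloads above, sketched with a hypothetical stand-in clip:

import vapoursynth as vs
from vskernels import Robidoux

core = vs.core
kernel = Robidoux()

clip = core.std.BlankClip(width=1280, height=720, format=vs.YUV420P16, length=24)

shifted_a = kernel.shift(clip, (0.5, 0.25))                          # single (top, left) tuple
shifted_b = kernel.shift(clip, 0.5, 0.25)                            # separate top and left values
shifted_c = kernel.shift(clip, [0.0, 0.5, 0.5], [0.0, 0.25, 0.25])   # per-plane lists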

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
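
Since the target dimensions are simply the source dimensions multiplied by rfactor and rounded up, a factor of 1.5 on a 1280x720 clip yields 1920x1080. A short sketch with a hypothetical stand-in clip:

import vapoursynth as vs
from vskernels import Robidoux

core = vs.core

clip = core.std.BlankClip(width=1280, height=720, format=vs.YUV420P16, length=24)

ss = Robidoux().supersample(clip, 1.5)
assert (ss.width, ss.height) == (1920, 1080)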

RobidouxSharp

RobidouxSharp(**kwargs: Any)

Bases: Bicubic

RobidouxSharp resizer (b=0.26201, c=0.36899).

Initialize the scaler with optional arguments.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image border handling and sampling grid alignment,

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip,

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the scaler with optional arguments.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    b = 6 / (13 + 7 * sqrt(2))
    c = 7 / (2 + 12 * sqrt(2))

    super().__init__(b=b, c=c, **kwargs)
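
As with Robidoux above, these closed forms reduce to the rounded values in the class summary (b ≈ 0.26201, c ≈ 0.36899) and lie on the same Keys line b + 2c = 1, only weighted further toward the sharpening c term. A quick check:

from math import sqrt

b = 6 / (13 + 7 * sqrt(2))    # ≈ 0.26201
c = 7 / (2 + 12 * sqrt(2))    # ≈ 0.36899

# RobidouxSharp is also a Keys cubic: b + 2c equals 1 exactly.
assert abs(b + 2 * c - 1) < 1e-9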

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The bobbed clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image border handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Note that the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py, lines 671-771
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in range 1.0-20.0
            (inclusive) and the sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during scaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)
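
Illustrative sketch of a luma-only descale, under the same assumptions (BlankClip is a stand-in for a real upscaled source; the presumed 720p native resolution is hypothetical):

import vapoursynth as vs

from vskernels import Catrom

core = vs.core

# Hypothetical 1080p source clip (stand-in for real footage).
src = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

# Descale only the luma plane back to a presumed 720p native resolution.
luma = core.std.ShufflePlanes(src, planes=0, colorfamily=vs.GRAY)
descaled = Catrom().descale(luma, 1280, 720)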

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py, lines 373-390
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
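
A hedged sketch of the inputs ensure_obj accepts; the exact set of resolvable names depends on the installed vskernels version:

from vskernels import Bicubic, Catrom

k_default = Catrom.ensure_obj()               # no input: instance of the class itself
k_from_type = Bicubic.ensure_obj(Catrom)      # a type is instantiated
k_passthrough = Bicubic.ensure_obj(Catrom())  # an instance is returned as-is
k_from_name = Bicubic.ensure_obj("catrom")    # a name is looked up and instantiated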

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py, lines 354-371
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
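
For comparison with ensure_obj above, a short sketch of from_param resolving a type rather than an instance:

from vskernels import Bicubic, Catrom

# from_param returns the resolved *type*; ensure_obj returns an *instance*.
KernelType = Bicubic.from_param(Catrom)
assert KernelType is Catrom

kernel = KernelType()  # instantiate it yourself when needed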

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py, lines 60-68
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py, lines 946-967
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py, lines 52-58
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}
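
A sketch showing how the same b/c pair is mapped to different argument names depending on the path taken (a GRAYS BlankClip is used as a throwaway input):

import vapoursynth as vs

from vskernels import Mitchell

core = vs.core
src = core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)

k = Mitchell()

# Resize path: b and c are forwarded as zimg's filter_param_a / filter_param_b.
resize_args = k.get_params_args(False, src, 1280, 720)

# Descale path: the same coefficients are forwarded as b and c instead.
descale_args = k.get_params_args(True, src, 1280, 720)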

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py, lines 992-1021
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py, lines 969-990
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py, lines 923-944
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py, lines 431-442
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py, lines 70-74
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2
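
This follows directly from the property above; a quick check, assuming vskernels is installed:

from vskernels import Catrom, Hermite

# Hermite (b=0, c=0) has a support of 1 tap per side; every other
# bicubic variant documented here has a support of 2.
assert Hermite().kernel_radius == 1
assert Catrom().kernel_radius == 2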

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py, lines 531-551
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py, lines 421-429
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py, lines 717-748
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )
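
An illustrative resample sketch (BlankClip stands in for a real clip; matrix_in=1 denotes BT.709 and is only needed when crossing color families):

import vapoursynth as vs

from vskernels import Catrom

core = vs.core
src = core.std.BlankClip(format=vs.YUV420P8, width=1920, height=1080)

# YUV -> RGB conversion; the source matrix must be given or inferable.
rgb = Catrom().resample(src, vs.RGB24, matrix_in=1)

# Plain bit-depth/format change within the same color family.
yuv16 = Catrom().resample(src, vs.YUV420P16)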

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image borders handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py, lines 773-843
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image borders handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in range 1.0-20.0
            (inclusive) and the sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)
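
A sketch of the usual descale/rescale round trip, under the same placeholder-source assumption:

import vapoursynth as vs

from vskernels import Catrom

core = vs.core
src = core.std.BlankClip(format=vs.GRAYS, width=1920, height=1080)

kernel = Catrom()

# Typical round trip: descale to the presumed native resolution,
# then rescale the result back up to the source dimensions.
descaled = kernel.descale(src, 1280, 720)
rescaled = kernel.rescale(descaled, 1920, 1080)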

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py, lines 527-660
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in range 1.0-20.0
            (inclusive) and the sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_h)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
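
A few illustrative scale calls (placeholder source; the per-plane shift values are arbitrary):

import vapoursynth as vs

from vskernels import Catrom

core = vs.core
src = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

kernel = Catrom()

# Plain upscale.
up = kernel.scale(src, 1920, 1080)

# Linear-light upscale through a sigmoid curve (slope 6.5, center 0.75 by default).
up_linear = kernel.scale(src, 1920, 1080, sigmoid=True)

# Per-plane shift: luma untouched, both chroma planes nudged by a quarter pixel.
shifted = kernel.scale(src, 1920, 1080, ([0, 0.25, 0.25], [0, 0.25, 0.25]))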

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but a list of shifts has been passed.

Source code in vskernels/abstract/base.py, lines 845-903
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but a list of shifts has been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)
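
Illustrative shift calls matching the overloads above (placeholder source, arbitrary shift values):

import vapoursynth as vs

from vskernels import Catrom

core = vs.core
src = core.std.BlankClip(format=vs.YUV420P8, width=1920, height=1080)

kernel = Catrom()

# Uniform shift: half a pixel down, a quarter pixel right, on all planes.
uniform = kernel.shift(src, (0.5, 0.25))

# Per-plane vertical shifts (luma untouched, chroma moved), no horizontal shift.
per_plane = kernel.shift(src, [0.0, 0.25, 0.25], 0.0)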

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py, lines 496-529
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
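
A quick supersample sketch; the output dimensions follow from ceil(width * rfactor) and ceil(height * rfactor):

import vapoursynth as vs

from vskernels import Catrom

core = vs.core
src = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

# rfactor=2 yields 3840x2160; fractional factors are rounded up.
doubled = Catrom().supersample(src, 2.0)
assert (doubled.width, doubled.height) == (3840, 2160)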

RobidouxSoft

RobidouxSoft(**kwargs: Any)

Bases: Bicubic

RobidouxSoft resizer (b=0.67962, c=0.16019).

Initialize the scaler with optional arguments.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • bob

    Apply bob deinterlacing to a given clip using the selected resizer.

  • deinterlace

    Apply deinterlacing to a given clip using the selected resizer.

  • descale

    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_bob_args
  • get_descale_args

    Generate and normalize argument dictionary for a descale operation.

  • get_params_args
  • get_resample_args

    Generate and normalize argument dictionary for a resample operation.

  • get_rescale_args

    Generate and normalize argument dictionary for a rescale operation.

  • get_scale_args

    Generate and normalize argument dictionary for a scale operation.

  • implemented_funcs

    Returns a set of function names that are implemented in the current class and the parent classes.

  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • resample

    Resample a video clip to the given format.

  • rescale

    Rescale a clip to the given resolution from a previously descaled clip,

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • shift

    Apply a subpixel shift to the clip using the kernel's scaling logic.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code in vskernels/kernels/zimg/bicubic.py, lines 220-229
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the scaler with optional arguments.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    b = (9 - 3 * sqrt(2)) / 7
    c = (1 - b) / 2
    super().__init__(b=b, c=c, **kwargs)
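
The values follow the Robidoux family relation b + 2c = 1; a small check, assuming vskernels is installed:

from math import sqrt

from vskernels import RobidouxSoft

k = RobidouxSoft()

# b = (9 - 3*sqrt(2)) / 7, roughly 0.67962, and c = (1 - b) / 2, roughly 0.16019.
assert abs(k.b - (9 - 3 * sqrt(2)) / 7) < 1e-9
assert abs(k.c - (1 - k.b) / 2) < 1e-9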

b instance-attribute

b = b

bob_function class-attribute instance-attribute

bob_function: Callable[..., ConstantFormatVideoNode] = Bob

Bob function called internally when performing bobbing operations.

c instance-attribute

c = c

descale_function class-attribute instance-attribute

descale_function: Callable[..., ConstantFormatVideoNode] = Debicubic

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

resample_function class-attribute instance-attribute

resample_function: Callable[..., ConstantFormatVideoNode] = Bicubic

rescale_function class-attribute instance-attribute

rescale_function: Callable[..., ConstantFormatVideoNode] = Bicubic

scale_function class-attribute instance-attribute

scale_function: Callable[..., VideoNode] = Bicubic

bob

bob(
    clip: VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode

Apply bob deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py, lines 26-47
def bob(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply bob deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.

    Returns:
        The bobbed clip.
    """
    clip_fieldbased = FieldBased.from_param_or_video(tff, clip, True, self.__class__)

    assert check_variable(clip, self.__class__)

    return self.bob_function(clip, **self.get_bob_args(clip, tff=clip_fieldbased.is_tff, **kwargs))
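
A minimal bob sketch (placeholder interlaced source; the frame-count doubling reflects one output frame per field):

import vapoursynth as vs

from vskernels import RobidouxSoft

core = vs.core

# Hypothetical interlaced PAL-sized source; substitute a real clip.
interlaced = core.std.BlankClip(format=vs.YUV420P8, width=720, height=576, length=50)
interlaced = interlaced.std.SetFieldBased(2)  # top field first

# Bobbing produces one full-height frame per field, doubling the frame count.
bobbed = RobidouxSoft().bob(interlaced)
assert bobbed.num_frames == 2 * interlaced.num_frames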

deinterlace

deinterlace(
    clip: VideoNode,
    *,
    tff: FieldBasedLike | bool | None = None,
    double_rate: bool = True,
    **kwargs: Any
) -> ConstantFormatVideoNode

Apply deinterlacing to a given clip using the selected resizer.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip

  • tff

    (FieldBasedLike | bool | None, default: None ) –

    Field order of the clip.

  • double_rate

    (bool, default: True ) –

    Whether to double the frame rate (True) or retain the original rate (False).

Returns:

  • ConstantFormatVideoNode

    The bobbed clip.

Source code in vskernels/kernels/zimg/abstract.py, lines 49-72
def deinterlace(
    self, clip: vs.VideoNode, *, tff: FieldBasedLike | bool | None = None, double_rate: bool = True, **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Apply deinterlacing to a given clip using the selected resizer.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip
        tff: Field order of the clip.
        double_rate: Whether to double the frame rate (True) or retain the original rate (False).

    Returns:
        The bobbed clip.
    """
    bobbed = self.bob(clip, tff=tff, **kwargs)

    if not double_rate:
        return bobbed[::2]

    return bobbed

descale

descale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Descale a clip to the given resolution, with image borders handling and sampling grid alignment, optionally using linear light processing.

Supports both progressive and interlaced sources. When interlaced, it will separate fields, perform per-field descaling, and weave them back.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target descaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target descaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during descaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to descale_function.

Returns:

  • ConstantFormatVideoNode

    The descaled video node, optionally processed in linear light.

Source code in vskernels/abstract/complex.py, lines 671-771
def descale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based,  ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Descale a clip to the given resolution, with image borders handling and sampling grid alignment,
    optionally using linear light processing.

    Supports both progressive and interlaced sources. When interlaced, it will separate fields,
    perform per-field descaling, and weave them back.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target descaled width (defaults to clip width if None).
        height: Target descaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in range 1.0-20.0
            (inclusive) and the sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during descaling.
        blur: Amount of blur to apply during scaling.
        **kwargs: Additional arguments passed to `descale_function`.

    Returns:
        The descaled video node, optionally processed in linear light.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=BorderHandling.from_param(border_handling, self.descale),
        ignore_mask=ignore_mask,
        blur=blur,
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        from vsdeinterlace import reweave

        shift_y, shift_x = _descale_shift_norm(shift, False, self.descale)

        kwargs_tf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[0], shift_x[0]), **kwargs)
        kwargs_bf, shift = sample_grid_model.for_src(clip, width, height, (shift_y[1], shift_x[1]), **kwargs)

        de_kwargs_tf = self.get_descale_args(clip, (shift_y[0], shift_x[0]), *de_base_args, **kwargs_tf)
        de_kwargs_bf = self.get_descale_args(clip, (shift_y[1], shift_x[1]), *de_base_args, **kwargs_bf)

        if height % 2:
            raise CustomIndexError("You can't descale to odd resolution when crossconverted!", self.descale)

        field_shift = 0.125 * height / clip.height

        fields = clip.std.SeparateFields(field_based.is_tff)

        descaled_tf = super().descale(
            fields[0::2],
            **de_kwargs_tf | {"src_top": de_kwargs_tf.get("src_top", 0.0) + field_shift},
        )
        descaled_bf = super().descale(
            fields[1::2],
            **de_kwargs_bf | {"src_top": de_kwargs_bf.get("src_top", 0.0) - field_shift},
        )
        descaled = reweave(descaled_tf, descaled_bf, field_based)
    else:
        shift = _descale_shift_norm(shift, True, self.descale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        descaled = super().descale(clip, **self.get_descale_args(clip, shift, *de_base_args, **kwargs))

    return depth(descaled, bits)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code in vskernels/abstract/base.py
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExcept | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code in vskernels/abstract/base.py
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExcept | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
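
As a rough illustration of how the two resolvers differ (assuming scalers can be looked up by their class name and that the classes are re-exported at package level; the exact lookup rules are not shown here):

from vskernels import Bicubic  # assumed package-level re-export

# from_param resolves to a scaler *type*, ensure_obj to an *instance*.
kernel_cls = Bicubic.from_param("Catrom")  # the Catrom class
kernel = Bicubic.ensure_obj("Catrom")      # a Catrom() instance

# With None, both fall back to the class they were called on:
# Bicubic.from_param(None) -> Bicubic, Bicubic.ensure_obj(None) -> Bicubic().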

get_bob_args

get_bob_args(
    clip: VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_bob_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any,
) -> dict[str, Any]:
    return super().get_bob_args(
        clip, shift, filter="bicubic", filter_param_a=self.b, filter_param_b=self.c, **kwargs
    )

get_descale_args

get_descale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a descale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the descale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the descale function.

Source code in vskernels/abstract/base.py
def get_descale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a descale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the descale function.

    Returns:
        Dictionary of keyword arguments for the descale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_params_args

get_params_args(
    is_descale: bool,
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code in vskernels/kernels/zimg/bicubic.py
def get_params_args(
    self, is_descale: bool, clip: vs.VideoNode, width: int | None = None, height: int | None = None, **kwargs: Any
) -> dict[str, Any]:
    args = super().get_params_args(is_descale, clip, width, height, **kwargs)
    if is_descale:
        return args | {"b": self.b, "c": self.c}
    return args | {"filter_param_a": self.b, "filter_param_b": self.c}
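
In practice this means the same b/c pair reaches the descale plugin under its native names and the zimg resizer as filter_param_a/filter_param_b. A small sketch (the printed dictionaries are illustrative and may contain additional keys):

import vapoursynth as vs
from vskernels import AdobeBicubic  # assumed package-level re-export

core = vs.core
clip = core.std.BlankClip(width=1920, height=1080, format=vs.GRAYS)

k = AdobeBicubic()

# Descale path: b/c are forwarded under their own names.
print(k.get_descale_args(clip, (0.0, 0.0), 1280, 720))
# ... 'src_top': 0.0, 'src_left': 0.0, 'b': 0, 'c': 0.75 ...

# Scale/resample path: the same pair becomes filter_param_a/filter_param_b.
print(k.get_scale_args(clip, (0.0, 0.0), 1920, 1080))
# ... 'src_top': 0.0, 'src_left': 0.0, 'filter_param_a': 0, 'filter_param_b': 0.75 ...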

get_resample_args

get_resample_args(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a resample operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None) –

    Target color matrix.

  • matrix_in

    (MatrixLike | None) –

    Source color matrix.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the resample function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the resample function.

Source code in vskernels/abstract/base.py
def get_resample_args(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None,
    matrix_in: MatrixLike | None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a resample operation.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: Target color matrix.
        matrix_in: Source color matrix.
        **kwargs: Additional arguments to pass to the resample function.

    Returns:
        Dictionary of keyword arguments for the resample function.
    """
    return {
        "format": get_video_format(format).id,
        "matrix": Matrix.from_param(matrix),
        "matrix_in": Matrix.from_param(matrix_in),
    } | self.get_params_args(False, clip, **kwargs)

get_rescale_args

get_rescale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a rescale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the rescale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the rescale function.

Source code in vskernels/abstract/base.py
def get_rescale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a rescale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the rescale function.

    Returns:
        Dictionary of keyword arguments for the rescale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(True, clip, width, height, **kwargs)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate and normalize argument dictionary for a scale operation.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Vertical and horizontal shift to apply.

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the scale function.

Returns:

  • dict[str, Any]

    Dictionary of keyword arguments for the scale function.

Source code in vskernels/abstract/base.py
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate and normalize argument dictionary for a scale operation.

    Args:
        clip: The source clip.
        shift: Vertical and horizontal shift to apply.
        width: Target width.
        height: Target height.
        **kwargs: Additional arguments to pass to the scale function.

    Returns:
        Dictionary of keyword arguments for the scale function.
    """
    return {"src_top": shift[0], "src_left": shift[1]} | self.get_params_args(False, clip, width, height, **kwargs)

implemented_funcs classmethod

implemented_funcs() -> frozenset[str]

Returns a set of function names that are implemented in the current class and the parent classes.

These functions determine which keyword arguments will be extracted from the __init__ method.

Returns:

  • frozenset[str]

    Frozen set of function names.

Source code in vskernels/abstract/base.py
@classproperty
@classmethod
def implemented_funcs(cls) -> frozenset[str]:
    """
    Returns a set of function names that are implemented in the current class and the parent classes.

    These functions determine which keyword arguments will be extracted from the __init__ method.

    Returns:
        Frozen set of function names.
    """
    return frozenset(func for klass in cls.mro() for func in getattr(klass, "_implemented_funcs", ()))

kernel_radius

kernel_radius() -> int
Source code in vskernels/kernels/zimg/bicubic.py
@ZimgComplexKernel.cachedproperty
def kernel_radius(self) -> int:
    if (self.b, self.c) == (0, 0):
        return 1
    return 2
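
In other words, only the b = c = 0 case (Hermite) collapses to a support radius of 1; every other bicubic variant, including AdobeBicubic, uses 2. A quick check, assuming both classes are re-exported at package level:

from vskernels import AdobeBicubic, Hermite

assert Hermite().kernel_radius == 1       # b=0, c=0
assert AdobeBicubic().kernel_radius == 2  # b=0, c=0.75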

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)
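
Migrating is a straight rename of the call; a minimal before/after sketch with a placeholder clip:

import vapoursynth as vs
from vskernels import AdobeBicubic  # assumed package-level re-export

clip = vs.core.std.BlankClip(width=1280, height=720, format=vs.YUV420P16)
kernel = AdobeBicubic()

# Before (emits a DeprecationWarning):
upscaled = kernel.multi(clip, 2.0)

# After:
upscaled = kernel.supersample(clip, 2.0)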

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code in vskernels/abstract/base.py
@BaseScalerMeta.cachedproperty
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

resample

resample(
    clip: VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Resample a video clip to the given format.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • format

    (int | VideoFormatLike | HoldsVideoFormat) –

    The target video format, which can either be:

    • an integer format ID,
    • a vs.PresetVideoFormat or vs.VideoFormat,
    • or a source from which a valid VideoFormat can be extracted.
  • matrix

    (MatrixLike | None, default: None ) –

    An optional color transformation matrix to apply.

  • matrix_in

    (MatrixLike | None, default: None ) –

    An optional input matrix for color transformations.

  • **kwargs

    (Any, default: {} ) –

    Additional keyword arguments passed to the resample_function.

Returns:

  • ConstantFormatVideoNode

    The resampled clip.

Source code in vskernels/abstract/base.py
def resample(
    self,
    clip: vs.VideoNode,
    format: int | VideoFormatLike | HoldsVideoFormat,
    matrix: MatrixLike | None = None,
    matrix_in: MatrixLike | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Resample a video clip to the given format.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        format: The target video format, which can either be:

               - an integer format ID,
               - a `vs.PresetVideoFormat` or `vs.VideoFormat`,
               - or a source from which a valid `VideoFormat` can be extracted.
        matrix: An optional color transformation matrix to apply.
        matrix_in: An optional input matrix for color transformations.
        **kwargs: Additional keyword arguments passed to the `resample_function`.

    Returns:
        The resampled clip.
    """
    return self.resample_function(
        clip, **_norm_props_enums(self.get_resample_args(clip, format, matrix, matrix_in, **kwargs))
    )
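
A minimal sketch of a YUV-to-RGB conversion; the BT.709 matrix is passed explicitly because the placeholder clip carries no matrix metadata (assuming AdobeBicubic is re-exported at package level):

import vapoursynth as vs
from vskernels import AdobeBicubic

core = vs.core
src = core.std.BlankClip(width=1920, height=1080, format=vs.YUV420P8)

# matrix_in describes the source; matrix would describe a YUV target.
rgb = AdobeBicubic().resample(src, vs.RGBS, matrix_in=1)  # 1 = BT.709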

rescale

rescale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: int | BorderHandling = MIRROR,
    sample_grid_model: int | SampleGridModel = MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> ConstantFormatVideoNode

Rescale a clip to the given resolution from a previously descaled clip, with image border handling and sampling grid alignment, optionally using linear light processing.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target scaled width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target scaled height (defaults to clip height if None).

  • shift

    (ShiftT, default: (0, 0) ) –

    Subpixel shift (top, left) or per-field shifts.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before rescaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0 (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).

  • border_handling

    (int | BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (int | SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • field_based

    (FieldBasedLike | None, default: None ) –

    Field-based processing mode (interlaced or progressive).

  • ignore_mask

    (VideoNode | None, default: None ) –

    Optional mask specifying areas to ignore during rescaling.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during rescaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to rescale_function.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code in vskernels/abstract/complex.py
def rescale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: ShiftT = (0, 0),
    *,
    # `linear` and `sigmoid` parameters from LinearDescaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # ComplexDescaler adds border_handling, sample_grid_model, field_based, ignore_mask and blur
    border_handling: int | BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: int | SampleGridModel = SampleGridModel.MATCH_EDGES,
    field_based: FieldBasedLike | None = None,
    ignore_mask: vs.VideoNode | None = None,
    blur: float | None = None,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Rescale a clip to the given resolution from a previously descaled clip,
    with image border handling and sampling grid alignment, optionally using linear light processing.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target scaled width (defaults to clip width if None).
        height: Target scaled height (defaults to clip height if None).
        shift: Subpixel shift (top, left) or per-field shifts.
        linear: Whether to linearize the input before rescaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        field_based: Field-based processing mode (interlaced or progressive).
        ignore_mask: Optional mask specifying areas to ignore during rescaling.
        blur: Amount of blur to apply during rescaling.
        **kwargs: Additional arguments passed to `rescale_function`.

    Returns:
        The scaled clip.
    """
    width, height = self._wh_norm(clip, width, height)
    check_correct_subsampling(clip, width, height)

    field_based = FieldBased.from_param_or_video(field_based, clip)

    clip, bits = expect_bits(clip, 32)

    de_base_args = (width, height // (1 + field_based.is_inter))
    kwargs.update(
        border_handling=BorderHandling.from_param(border_handling, self.rescale), ignore_mask=ignore_mask, blur=blur
    )

    sample_grid_model = SampleGridModel(sample_grid_model)

    if field_based.is_inter:
        raise NotImplementedError
    else:
        shift = _descale_shift_norm(shift, True, self.rescale)

        kwargs, shift = sample_grid_model.for_src(clip, width, height, shift, **kwargs)

        rescaled = super().rescale(
            clip, **self.get_rescale_args(clip, shift, *de_base_args, **kwargs), linear=linear, sigmoid=sigmoid
        )

    return depth(rescaled, bits)
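
A typical use is the descale/rescale round trip used to judge how well a kernel reverses an upscale; a sketch with a placeholder luma clip (assuming AdobeBicubic is re-exported at package level):

import vapoursynth as vs
from vskernels import AdobeBicubic

core = vs.core
src = core.std.BlankClip(width=1920, height=1080, format=vs.GRAYS)

kernel = AdobeBicubic()

descaled = kernel.descale(src, 1280, 720)
reupscaled = kernel.rescale(descaled, 1920, 1080)

# Absolute difference between source and round trip, e.g. for an error mask.
diff = core.std.Expr([src, reupscaled], "x y - abs")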

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0 (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code in vskernels/abstract/complex.py
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before descaling. If None, inferred from sigmoid.
        sigmoid: Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind sigmoid slope has to be in range 1.0-20.0
            (inclusive) and sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_w)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
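
Two common invocations, sketched with a placeholder clip: a plain upscale, and per-plane subpixel shifts given as lists (one entry per plane), which takes the plane-by-plane path above:

import vapoursynth as vs
from vskernels import AdobeBicubic  # assumed package-level re-export

core = vs.core
src = core.std.BlankClip(width=1280, height=720, format=vs.YUV420P16)

kernel = AdobeBicubic()

# Plain upscale to 1080p.
up = kernel.scale(src, 1920, 1080)

# Per-plane (top, left) shifts: luma untouched, both chroma planes nudged.
shifted = kernel.scale(src, 1920, 1080, ([0.0, 0.25, 0.25], [0.0, 0.25, 0.25]))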

shift

shift(
    clip: VideoNode, shift: tuple[TopShift, LeftShift], /, **kwargs: Any
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shift_top: float | list[float],
    shift_left: float | list[float],
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode
shift(
    clip: VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode

Apply a subpixel shift to the clip using the kernel's scaling logic.

If a single float or tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shifts_or_top

    (float | tuple[float, float] | list[float]) –

    Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.

  • shift_left

    (float | list[float] | None, default: None ) –

    Horizontal shift or list of horizontal shifts. Ignored if shifts_or_top is a tuple.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments passed to the internal scale call.

Returns:

  • ConstantFormatVideoNode

    A new clip with the applied shift.

Raises:

  • VariableFormatError

    If the input clip has variable format.

  • CustomValueError

    If the input clip is GRAY but lists of shifts have been passed.

Source code in vskernels/abstract/base.py
def shift(
    self,
    clip: vs.VideoNode,
    shifts_or_top: float | tuple[float, float] | list[float],
    shift_left: float | list[float] | None = None,
    /,
    **kwargs: Any,
) -> ConstantFormatVideoNode:
    """
    Apply a subpixel shift to the clip using the kernel's scaling logic.

    If a single float or tuple is provided, it is used uniformly.
    If a list is given, the shift is applied per plane.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        shifts_or_top: Either a single vertical shift, a (top, left) tuple, or a list of vertical shifts.
        shift_left: Horizontal shift or list of horizontal shifts. Ignored if `shifts_or_top` is a tuple.
        **kwargs: Additional arguments passed to the internal `scale` call.

    Returns:
        A new clip with the applied shift.

    Raises:
        VariableFormatError: If the input clip has variable format.
        CustomValueError: If the input clip is GRAY but lists of shifts have been passed.
    """
    assert check_variable_format(clip, self.shift)

    n_planes = clip.format.num_planes

    def _shift(src: vs.VideoNode, shift: tuple[TopShift, LeftShift] = (0, 0)) -> ConstantFormatVideoNode:
        return self.scale(src, shift=shift, **kwargs)  # type: ignore[return-value]

    if isinstance(shifts_or_top, tuple):
        return _shift(clip, shifts_or_top)

    if isinstance(shifts_or_top, (int, float)) and isinstance(shift_left, (int, float, NoneType)):
        return _shift(clip, (shifts_or_top, shift_left or 0))

    if shift_left is None:
        shift_left = 0.0

    shifts_top = normalize_seq(shifts_or_top, n_planes)
    shifts_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        return _shift(clip, (shifts_top[0], shifts_left[0]))

    shifted_planes = [
        plane if top == left == 0 else _shift(plane, (top, left))
        for plane, top, left in zip(split(clip), shifts_top, shifts_left)
    ]

    return core.std.ShufflePlanes(shifted_planes, [0, 0, 0], clip.format.color_family, clip)
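
The three call shapes from the overloads above, sketched with a placeholder clip (assuming AdobeBicubic is re-exported at package level):

import vapoursynth as vs
from vskernels import AdobeBicubic

core = vs.core
src = core.std.BlankClip(width=1920, height=1080, format=vs.YUV420P16)

kernel = AdobeBicubic()

a = kernel.shift(src, (0.5, -0.25))          # single (top, left) tuple
b = kernel.shift(src, 0.5, -0.25)            # separate top / left values
c = kernel.shift(src, [0.0, 0.5, 0.5], 0.0)  # per-plane vertical shifts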

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code in vskernels/abstract/base.py
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
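
Dimensions are rounded up with ceil, so fractional factors are accepted as long as the result stays positive; a sketch with a placeholder clip (assuming AdobeBicubic is re-exported at package level):

import vapoursynth as vs
from vskernels import AdobeBicubic

core = vs.core
src = core.std.BlankClip(width=1280, height=720, format=vs.YUV420P16)

ss = AdobeBicubic().supersample(src, 2.0)
assert (ss.width, ss.height) == (2560, 1440)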