placebo

Classes:

EwaBicubic

EwaBicubic(
    b: float = 0.0, c: float = 0.5, radius: int | None = None, **kwargs: Any
)

Bases: Placebo

Ewa Bicubic resizer.

Initialize the scaler with specific 'b' and 'c' parameters and optional arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • b

    (float, default: 0.0 ) –

    The 'b' parameter for bicubic interpolation.

  • c

    (float, default: 0.5 ) –

    The 'c' parameter for bicubic interpolation.

  • radius

    (int | None, default: None ) –

    Overrides the filter kernel radius.

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.
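As background, the 'b' and 'c' parameters select a member of the Mitchell-Netravali bicubic family (the defaults b=0, c=0.5 give Catmull-Rom). A minimal sketch of the weight function these parameters define, in pure Python and not part of this library's API:

```python
def bicubic_weight(x: float, b: float = 0.0, c: float = 0.5) -> float:
    # Mitchell-Netravali piecewise cubic; support is |x| < 2.
    x = abs(x)
    if x < 1:
        return ((12 - 9 * b - 6 * c) * x**3 + (-18 + 12 * b + 6 * c) * x**2 + (6 - 2 * b)) / 6
    if x < 2:
        return ((-b - 6 * c) * x**3 + (6 * b + 30 * c) * x**2 + (-12 * b - 48 * c) * x + (8 * b + 24 * c)) / 6
    return 0.0

# Catmull-Rom (the defaults) is interpolating: weight 1 at 0, weight 0 at integers.
assert abs(bicubic_weight(0.0) - 1.0) < 1e-9
assert abs(bicubic_weight(1.0)) < 1e-9
```

For the EWA variants the kernel is evaluated radially rather than separably, but the (b, c) weighting itself is the same family.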

Classes:

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(self, b: float = 0.0, c: float = 0.5, radius: int | None = None, **kwargs: Any) -> None:
    """
    Initialize the scaler with specific 'b' and 'c' parameters and optional arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    Args:
        b: The 'b' parameter for bicubic interpolation.
        c: The 'c' parameter for bicubic interpolation.
        radius: Overrides the filter kernel radius.
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    if radius is None:
        radius = 1 if (b, c) == (0, 0) else 2

    super().__init__(radius, b, c, **kwargs)
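The radius fallback in this initializer can be restated simply: the Hermite case (b = c = 0) only needs a radius of 1, and every other (b, c) pair gets 2.

```python
def default_radius(b: float, c: float) -> int:
    # Mirrors the fallback in __init__: the b == c == 0 (Hermite) case
    # needs only radius 1; any other (b, c) pair uses radius 2.
    return 1 if (b, c) == (0, 0) else 2

assert default_radius(0, 0) == 1      # Hermite
assert default_radius(0, 0.5) == 2    # the EwaBicubic defaults (Catmull-Rom)
```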

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

radius instance-attribute

radius = radius

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    kwargs.pop("format", None)

    return (
        {
            "sx": kwargs.pop("src_left", shift[1]),
            "sy": kwargs.pop("src_top", shift[0]),
            "width": width,
            "height": height,
            "filter": self._kernel,
            "radius": self.radius,
            "param1": self.b,
            "param2": self.c,
            "clamp": self.clamp,
            "taper": self.taper,
            "blur": self.blur,
            "antiring": self.antiring,
        }
        | self.kwargs
        | kwargs
        | {"linearize": False, "sigmoidize": False}
    )
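The return expression above relies on dict union (`|`), where later mappings override earlier ones: instance kwargs override the computed defaults, call-time kwargs override both, and the final mapping unconditionally disables the plugin's own linearization. A standalone sketch of that precedence, using stand-in dicts:

```python
computed = {"radius": 2, "param1": 0.0, "linearize": True}  # stand-in for the built dict
self_kwargs = {"param1": 0.3}                               # stand-in for self.kwargs (from __init__)
call_kwargs = {"radius": 4}                                 # stand-in for **kwargs at call time

# Later operands of | win; the last mapping always forces linearize/sigmoidize off.
merged = computed | self_kwargs | call_kwargs | {"linearize": False, "sigmoidize": False}

assert merged == {"radius": 4, "param1": 0.3, "linearize": False, "sigmoidize": False}
```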

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.radius:
        return ceil(self.radius)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2
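A standalone restatement of the radius resolution above, with a simplified stand-in for the library's fallback() helper:

```python
from math import ceil

def fallback(value, default):
    # Simplified stand-in for the library helper: use default when value is None.
    return default if value is None else value

def kernel_radius(radius, b, c) -> int:
    if radius:                      # an explicit radius wins, rounded up
        return ceil(radius)
    if b or c:                      # bicubic parameters present
        b, c = fallback(b, 0), fallback(c, 0.5)
        if (b, c) == (0, 0):
            return 1
    return 2                        # default support

assert kernel_radius(3.2383154841662362, None, None) == 4  # e.g. a jinc-based filter
assert kernel_radius(2, 0.0, 0.5) == 2                     # EwaBicubic defaults
```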

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, it is inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Note that the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center must be in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before scaling. If None, it is inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Note that the sigmoid slope must be in the range 1.0-20.0
            (inclusive) and the sigmoid center must be in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_w)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
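The offset arithmetic above compensates for chroma siting when the formats involved are subsampled. A standalone sketch with hypothetical numbers for a left-sited 4:2:0 clip (off_left = -0.5 is an assumed get_offsets convention, not taken from this page):

```python
subsampling_w = 1                 # 4:2:0 / 4:2:2: chroma planes are half width
factor_w = 1 / 2**subsampling_w   # 0.5: converts luma-pixel offsets to chroma-plane units

off_left = -0.5                   # assumed left-sited input chroma
off_left_out = -0.5               # same siting on the output format

# When input and output siting match, the format-out correction cancels out:
offc_left_same = (abs(off_left) + off_left_out) * factor_w
assert offc_left_same == 0.0

# Converting to center-sited chroma (offset 0) leaves a residual quarter-pixel shift:
offc_left_center = (abs(off_left) + 0.0) * factor_w
assert offc_left_center == 0.25
```

This is only the format-out term; the code above adds a further scale-dependent term when the output stays subsampled.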

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
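The dimension math in supersample reduces to rounding each scaled dimension up; a standalone sketch:

```python
from math import ceil

def supersample_dims(width: int, height: int, rfactor: float) -> tuple[int, int]:
    # Mirrors supersample(): scale both dimensions by rfactor and round up.
    dst_w, dst_h = ceil(width * rfactor), ceil(height * rfactor)
    if max(dst_w, dst_h) <= 0:
        raise ValueError('Multiplying the resolution by "rfactor" must result in a positive resolution!')
    return dst_w, dst_h

assert supersample_dims(1920, 1080, 2.0) == (3840, 2160)
assert supersample_dims(1280, 720, 1.5) == (1920, 1080)
```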

EwaGinseng

EwaGinseng(radius: float = 3.238315484166236, **kwargs: Any)

Bases: Placebo

Ewa Ginseng resizer.

Initialize the kernel with a specific radius and optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • radius

    (float, default: 3.238315484166236 ) –

    Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Classes:

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(self, radius: float = 3.2383154841662362076499, **kwargs: Any) -> None:
    """
    Initialize the kernel with a specific radius and optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    Args:
        radius: Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(radius, None, None, **kwargs)

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

radius instance-attribute

radius = radius

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    kwargs.pop("format", None)

    return (
        {
            "sx": kwargs.pop("src_left", shift[1]),
            "sy": kwargs.pop("src_top", shift[0]),
            "width": width,
            "height": height,
            "filter": self._kernel,
            "radius": self.radius,
            "param1": self.b,
            "param2": self.c,
            "clamp": self.clamp,
            "taper": self.taper,
            "blur": self.blur,
            "antiring": self.antiring,
        }
        | self.kwargs
        | kwargs
        | {"linearize": False, "sigmoidize": False}
    )

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.radius:
        return ceil(self.radius)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, it is inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Note that the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center must be in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before scaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in range 1.0-20.0
            (inclusive) and the sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width / 2**format_out.subsampling_w)
            h = round(height / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
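The per-plane branch above relies on `normalize_seq` to expand a scalar shift, or a short list of shifts, into exactly one value per plane. A minimal sketch of that behavior (a hypothetical reimplementation of the vstools helper, assuming it pads by repeating the last element):

```python
def normalize_seq(val, length):
    """Expand a scalar, or pad/truncate a sequence, to exactly `length` items."""
    vals = list(val) if isinstance(val, (list, tuple)) else [val]
    vals += vals[-1:] * (length - len(vals))  # repeat the last element to fill
    return vals[:length]

# Uniform shift for a 3-plane clip vs. a per-plane list padded to 3 planes
normalize_seq(0.5, 3)          # [0.5, 0.5, 0.5]
normalize_seq([0.0, 0.25], 3)  # [0.0, 0.25, 0.25]
```

This is why passing a single float and passing a one-element list behave the same, while a two-element list reuses its last value for the remaining chroma plane.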

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
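The dimension math above can be checked in isolation; this is just the `ceil`-based rounding from `supersample`, extracted into a standalone helper for illustration (the function name is made up):

```python
from math import ceil

def supersample_dims(width, height, rfactor=2.0):
    # Non-integer products round up, matching the ceil() calls in supersample
    return ceil(width * rfactor), ceil(height * rfactor)

supersample_dims(1920, 1080)     # (3840, 2160)
supersample_dims(853, 480, 1.5)  # (1280, 720)
```

Because `ceil` of any positive product is at least 1, the non-positive-resolution error can only trigger for `rfactor <= 0`.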

EwaHann

EwaHann(radius: float = 3.238315484166236, **kwargs: Any)

Bases: Placebo

Ewa Hann resizer.

Initialize the kernel with a specific radius and optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • radius

    (float, default: 3.238315484166236 ) –

    Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Classes:

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(self, radius: float = 3.2383154841662362076499, **kwargs: Any) -> None:
    """
    Initialize the kernel with a specific radius and optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    Args:
        radius: Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(radius, None, None, **kwargs)
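The forwarding rule described above (stored keywords reach a method only if it names them, and call-time keywords win on collisions) can be sketched with `inspect.signature`. This is a hypothetical illustration of the rule, not the library's actual mechanism; all names below are made up:

```python
import inspect

def forward_kwargs(func, stored, passed):
    # Keep only stored kwargs that `func` explicitly names as parameters,
    # then let call-time kwargs take precedence on key collisions.
    names = set(inspect.signature(func).parameters)
    return {k: v for k, v in stored.items() if k in names} | passed

def scale(clip, width=None, height=None, blur=None, antiring=None):
    return {"blur": blur, "antiring": antiring}

stored = {"blur": 0.5, "antiring": 0.2, "unrelated": 1}
merged = forward_kwargs(scale, stored, {"blur": 1.5})
# merged == {"blur": 1.5, "antiring": 0.2} — "unrelated" is dropped
# because scale() does not name it, and the call-time blur wins
```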

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

radius instance-attribute

radius = radius

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    kwargs.pop("format", None)

    return (
        {
            "sx": kwargs.pop("src_left", shift[1]),
            "sy": kwargs.pop("src_top", shift[0]),
            "width": width,
            "height": height,
            "filter": self._kernel,
            "radius": self.radius,
            "param1": self.b,
            "param2": self.c,
            "clamp": self.clamp,
            "taper": self.taper,
            "blur": self.blur,
            "antiring": self.antiring,
        }
        | self.kwargs
        | kwargs
        | {"linearize": False, "sigmoidize": False}
    )
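The return expression above layers four dicts with `|`, where later operands win on key collisions: built-in defaults, then `self.kwargs`, then call-time kwargs, with `linearize`/`sigmoidize` forced off last. A minimal demonstration of that precedence chain (all values made up):

```python
defaults = {"radius": 3.24, "blur": None, "antiring": 0.0, "linearize": True}
instance_kwargs = {"antiring": 0.5}              # stands in for self.kwargs
call_kwargs = {"blur": 1.0, "linearize": True}   # stands in for **kwargs
forced = {"linearize": False, "sigmoidize": False}  # always last, always wins

merged = defaults | instance_kwargs | call_kwargs | forced
# antiring comes from the instance, blur from the call, and
# linearize/sigmoidize are unconditionally disabled
```

The trailing `forced` dict is the design point: linearization and sigmoidization are handled by the wrapping scaler logic, so they must never leak into the underlying resample call regardless of what the caller passes.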

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.radius:
        return ceil(self.radius)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in range 1.0-20.0 (inclusive) and the sigmoid center has to be in range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before scaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in range 1.0-20.0
            (inclusive) and the sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width / 2**format_out.subsampling_w)
            h = round(height / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

EwaJinc

EwaJinc(radius: float = 3.238315484166236, **kwargs: Any)

Bases: Placebo

Ewa Jinc resizer.

Initialize the kernel with a specific radius and optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • radius

    (float, default: 3.238315484166236 ) –

    Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Classes:

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(self, radius: float = 3.2383154841662362076499, **kwargs: Any) -> None:
    """
    Initialize the kernel with a specific radius and optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    Args:
        radius: Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(radius, None, None, **kwargs)

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

radius instance-attribute

radius = radius

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    kwargs.pop("format", None)

    return (
        {
            "sx": kwargs.pop("src_left", shift[1]),
            "sy": kwargs.pop("src_top", shift[0]),
            "width": width,
            "height": height,
            "filter": self._kernel,
            "radius": self.radius,
            "param1": self.b,
            "param2": self.c,
            "clamp": self.clamp,
            "taper": self.taper,
            "blur": self.blur,
            "antiring": self.antiring,
        }
        | self.kwargs
        | kwargs
        | {"linearize": False, "sigmoidize": False}
    )

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.radius:
        return ceil(self.radius)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center has to be in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before scaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0
            (inclusive) and the sigmoid center has to be in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_w)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
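The per-plane branch above relies on `normalize_seq` to expand `shift_top`/`shift_left` to one value per plane. A self-contained sketch of that normalization, assuming (as with the vstools helper) that scalars and short sequences are padded by repeating the last element:

```python
def normalize_seq(val, length):
    # Wrap scalars in a list, then pad by repeating the last element (assumed behavior).
    vals = list(val) if isinstance(val, (list, tuple)) else [val]
    vals += [vals[-1]] * (length - len(vals))
    return vals[:length]

# A scalar shift applies uniformly; a per-plane list is padded to num_planes.
assert normalize_seq(0.5, 3) == [0.5, 0.5, 0.5]
assert normalize_seq([0.0, 0.25], 3) == [0.0, 0.25, 0.25]
```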

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
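The target dimensions are simply the source dimensions times `rfactor`, rounded up. A self-contained sketch of that computation:

```python
from math import ceil

def supersample_dims(width, height, rfactor):
    # Mirrors the computation in supersample(): round each scaled dimension up.
    return ceil(width * rfactor), ceil(height * rfactor)

assert supersample_dims(1280, 720, 2.0) == (2560, 1440)
assert supersample_dims(853, 480, 1.5) == (1280, 720)  # 1279.5 rounds up to 1280
```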

EwaLanczos

EwaLanczos(radius: float = 3.238315484166236, **kwargs: Any)

Bases: Placebo

Ewa Lanczos resizer.

Initialize the kernel with a specific radius and optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • radius

    (float, default: 3.238315484166236 ) –

    Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Classes:

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(self, radius: float = 3.2383154841662362076499, **kwargs: Any) -> None:
    """
    Initialize the kernel with a specific radius and optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    Args:
        radius: Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(radius, None, None, **kwargs)
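The unusual default radius appears to be the third zero of the jinc function J1(πx)/(πx), making this the elliptical (EWA) analogue of a Lanczos-3 window; this is an observation, not something stated by the source. A quick numerical check against the known third positive zero of the Bessel function J1:

```python
from math import pi

# Third positive zero of the Bessel function J1 (a standard tabulated constant).
J1_ZERO_3 = 10.173468135062722

# Dividing by pi recovers the default EwaLanczos radius (hedged observation).
assert abs(J1_ZERO_3 / pi - 3.238315484166236) < 1e-9
```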

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

radius instance-attribute

radius = radius

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
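The helpers `_base_from_param` and `_base_ensure_obj` are internal, but the resolution pattern they implement can be sketched in simplified form (a hypothetical registry, not the library's actual mechanism): strings and classes resolve to a scaler type, and `ensure_obj` additionally instantiates unless given an instance.

```python
class Scaler:
    _registry: dict[str, type["Scaler"]] = {}

    def __init_subclass__(cls, **kwargs):
        # Register every subclass under its lowercased name.
        super().__init_subclass__(**kwargs)
        cls._registry[cls.__name__.lower()] = cls

    @classmethod
    def from_param(cls, scaler=None):
        # None falls back to the class this is called on.
        if scaler is None:
            return cls
        if isinstance(scaler, str):
            return cls._registry[scaler.lower()]
        if isinstance(scaler, type):
            return scaler
        return type(scaler)

    @classmethod
    def ensure_obj(cls, scaler=None):
        # Instances pass through; anything else is resolved and instantiated.
        if isinstance(scaler, Scaler):
            return scaler
        return cls.from_param(scaler)()

class EwaLanczos(Scaler): ...

assert Scaler.from_param("ewalanczos") is EwaLanczos
assert isinstance(Scaler.ensure_obj(EwaLanczos), EwaLanczos)
```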

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    kwargs.pop("format", None)

    return (
        {
            "sx": kwargs.pop("src_left", shift[1]),
            "sy": kwargs.pop("src_top", shift[0]),
            "width": width,
            "height": height,
            "filter": self._kernel,
            "radius": self.radius,
            "param1": self.b,
            "param2": self.c,
            "clamp": self.clamp,
            "taper": self.taper,
            "blur": self.blur,
            "antiring": self.antiring,
        }
        | self.kwargs
        | kwargs
        | {"linearize": False, "sigmoidize": False}
    )

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.radius:
        return ceil(self.radius)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center has to be in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before scaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0
            (inclusive) and the sigmoid center has to be in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_w)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

EwaLanczos4Sharpest

EwaLanczos4Sharpest(
    radius: float = 4.24106286379607,
    blur: float = 0.8845120932605005,
    antiring: float = 0.8,
    **kwargs: Any
)

Bases: Placebo

Ewa Lanczos resizer.

Initialize the kernel with a specific radius and optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • radius

    (float, default: 4.24106286379607 ) –

    Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.

  • blur

    (float, default: 0.8845120932605005 ) –

    Additional blur coefficient. This effectively stretches the kernel, without changing the effective radius of the filter.

  • antiring

    (float, default: 0.8 ) –

    Antiringing strength.

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    radius: float = 4.2410628637960698819573,
    blur: float = 0.88451209326050047745788,
    antiring: float = 0.8,
    **kwargs: Any,
) -> None:
    """
    Initialize the kernel with a specific radius and optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    Args:
        radius: Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.
        blur: Additional blur coefficient. This effectively stretches the kernel,
            without changing the effective radius of the filter.
        antiring: Antiringing strength.
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(radius, None, None, blur=blur, antiring=antiring, **kwargs)

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

radius instance-attribute

radius = radius

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    kwargs.pop("format", None)

    return (
        {
            "sx": kwargs.pop("src_left", shift[1]),
            "sy": kwargs.pop("src_top", shift[0]),
            "width": width,
            "height": height,
            "filter": self._kernel,
            "radius": self.radius,
            "param1": self.b,
            "param2": self.c,
            "clamp": self.clamp,
            "taper": self.taper,
            "blur": self.blur,
            "antiring": self.antiring,
        }
        | self.kwargs
        | kwargs
        | {"linearize": False, "sigmoidize": False}
    )
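
The `|` unions at the end of `get_scale_args` decide precedence: later operands win, so per-instance `self.kwargs` override the computed defaults, call-site `kwargs` override both, and `linearize`/`sigmoidize` are forced off last. A minimal sketch of the same merge order (the dict names are illustrative):

```python
# Later operands of a dict union take precedence, as in get_scale_args.
defaults = {"radius": 2.0, "blur": 1.0, "linearize": True}
instance_kwargs = {"blur": 0.9}   # set on the scaler instance
call_kwargs = {"radius": 3.0}     # passed at call time

merged = defaults | instance_kwargs | call_kwargs | {"linearize": False, "sigmoidize": False}
```

Here `blur` comes from the instance, `radius` from the call site, and `linearize` is always disabled, regardless of what the earlier operands said.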

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.radius:
        return ceil(self.radius)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2
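
Restated as a plain function, the decision order above is: an explicit radius always wins; otherwise the bicubic (b, c) pair is consulted; everything else falls back to 2. This standalone sketch assumes `fallback(x, d)` simply substitutes `d` when `x` is None, as used in the property:

```python
from math import ceil

def resolve_kernel_radius(radius, b, c) -> int:
    # An explicit radius takes priority over everything else.
    if radius:
        return ceil(radius)
    # Otherwise inspect the bicubic parameters, with the same
    # fallbacks as above: b defaults to 0, c to 0.5.
    if b or c:
        b = 0 if b is None else b
        c = 0.5 if c is None else c
        if (b, c) == (0, 0):
            return 1
    return 2
```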

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.
Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)
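
`multi` is a pure forwarding shim: the decorator emits a `DeprecationWarning` and the body delegates to `supersample`. A hypothetical stand-in for such a decorator (the real one is imported by the library; this sketch only assumes warn-then-forward semantics):

```python
import warnings
from functools import wraps

def deprecated(message: str, category: type[Warning] = DeprecationWarning):
    # Minimal warn-then-forward decorator: emit the warning at each
    # call site, then invoke the wrapped function unchanged.
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(message, category=category, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator
```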

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in range 1.0-20.0 (inclusive) and the sigmoid center has to be in range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before scaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in range 1.0-20.0
            (inclusive) and the sigmoid center has to be in range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_w)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
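
When `shift` is given as lists, the method above first broadcasts each component to one value per plane before scaling the planes individually. A hypothetical stand-in for the `normalize_seq` helper (the real one lives in the support library; this sketch only assumes broadcast-scalar / pad-with-last-element semantics):

```python
def normalize_seq(value, length):
    # Broadcast a scalar, or pad a short sequence with its last
    # element, so the result has exactly `length` entries.
    seq = list(value) if isinstance(value, (list, tuple)) else [value]
    seq += [seq[-1]] * (length - len(seq))
    return seq[:length]

# One shift per plane of a 3-plane (e.g. YUV) clip:
shift_top = normalize_seq(0.25, 3)         # scalar broadcast
shift_left = normalize_seq([0.0, 0.5], 3)  # padded with last element
```

After normalization, the per-plane loop above adds the computed chroma offsets to every plane past the first.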

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If the resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.
Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If the resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

EwaLanczosSharp

EwaLanczosSharp(
    radius: float = 3.238315484166236,
    blur: float = 0.9812505837223707,
    **kwargs: Any
)

Bases: Placebo

Ewa Lanczos resizer.

Initialize the kernel with a specific radius and optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • radius

    (float, default: 3.238315484166236 ) –

    Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.

  • blur

    (float, default: 0.9812505837223707 ) –

    Additional blur coefficient. This effectively stretches the kernel, without changing the effective radius of the filter.

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self, radius: float = 3.2383154841662362076499, blur: float = 0.98125058372237073562493, **kwargs: Any
) -> None:
    """
    Initialize the kernel with a specific radius and optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    Args:
        radius: Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.
        blur: Additional blur coefficient. This effectively stretches the kernel,
            without changing the effective radius of the filter.
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(radius, None, None, blur=blur, **kwargs)

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

radius instance-attribute

radius = radius

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function returned for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    kwargs.pop("format", None)

    return (
        {
            "sx": kwargs.pop("src_left", shift[1]),
            "sy": kwargs.pop("src_top", shift[0]),
            "width": width,
            "height": height,
            "filter": self._kernel,
            "radius": self.radius,
            "param1": self.b,
            "param2": self.c,
            "clamp": self.clamp,
            "taper": self.taper,
            "blur": self.blur,
            "antiring": self.antiring,
        }
        | self.kwargs
        | kwargs
        | {"linearize": False, "sigmoidize": False}
    )

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.radius:
        return ceil(self.radius)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.
Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in range 1.0-20.0 (inclusive) and the sigmoid center has to be in range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before scaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center).
            `True` applies the default values (6.5, 0.75). Note that the sigmoid slope must be in the range
            1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_w)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
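The chroma-shift arithmetic above reduces to a couple of lines of plain math. A dependency-free sketch of the base offset computation (the offset inputs below are illustrative placeholders, not real `ChromaLocation.get_offsets` output):

```python
def chroma_shift_offsets(
    off_left: float,
    off_top: float,
    off_left_out: float,
    off_top_out: float,
    subsampling_w: int,
    subsampling_h: int,
) -> tuple[float, float]:
    """Base per-plane offsets for a format conversion, mirroring the listing above."""
    factor_w = 1 / 2**subsampling_w  # e.g. 0.5 horizontally for 4:2:0
    factor_h = 1 / 2**subsampling_h

    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h
    return offc_left, offc_top


# Matching left-aligned siting (-0.5) in and out cancels horizontally:
print(chroma_shift_offsets(-0.5, 0.0, -0.5, 0.0, 1, 1))  # (0.0, 0.0)
```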

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
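The destination size above is just `ceil(dimension * rfactor)` per axis, so fractional factors round up rather than truncate. A quick standalone check (standard library only):

```python
from math import ceil


def supersample_dims(width: int, height: int, rfactor: float) -> tuple[int, int]:
    """Mirror of the dimension math in supersample: scale then round up."""
    return ceil(width * rfactor), ceil(height * rfactor)


print(supersample_dims(1920, 1080, 2.0))  # (3840, 2160)
print(supersample_dims(853, 480, 1.5))    # (1280, 720)
```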

EwaRobidoux

EwaRobidoux(**kwargs: Any)

Bases: Placebo

Ewa Robidoux resizer.

Initialize the kernel with optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Classes:

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the kernel with optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(None, None, None, **kwargs)

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

radius instance-attribute

radius = radius

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...
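A read-only variant of `functools.cached_property` can be sketched by turning it into a data descriptor that rejects writes. This is a plausible illustration under that assumption, not the library's actual implementation (the class name here is hypothetical):

```python
from functools import cached_property as _std_cached_property
from typing import Any


class readonly_cached_property(_std_cached_property):  # hypothetical name
    """Caches on first access like cached_property, but forbids assignment."""

    def __set__(self, instance: Any, value: Any) -> None:
        # Defining __set__ makes this a data descriptor, so writes are blocked
        # while __get__ still serves the value cached in the instance __dict__.
        raise AttributeError(f"can't set read-only attribute {self.attrname!r}")


class Demo:
    @readonly_cached_property
    def answer(self) -> int:
        return 40 + 2


d = Demo()
print(d.answer)  # 42, computed once and cached
```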

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    kwargs.pop("format", None)

    return (
        {
            "sx": kwargs.pop("src_left", shift[1]),
            "sy": kwargs.pop("src_top", shift[0]),
            "width": width,
            "height": height,
            "filter": self._kernel,
            "radius": self.radius,
            "param1": self.b,
            "param2": self.c,
            "clamp": self.clamp,
            "taper": self.taper,
            "blur": self.blur,
            "antiring": self.antiring,
        }
        | self.kwargs
        | kwargs
        | {"linearize": False, "sigmoidize": False}
    )
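`get_scale_args` leans on dict union (`|`) precedence: later operands win, so instance `self.kwargs` override the computed defaults, call-time `kwargs` override those, and the trailing dict pins `linearize`/`sigmoidize` unconditionally. A minimal illustration with placeholder values:

```python
# Later operands of `|` take precedence, mirroring get_scale_args.
defaults = {"radius": None, "blur": 0.0, "linearize": True}
instance_kwargs = {"blur": 0.5}   # set at __init__ time
call_kwargs = {"radius": 3}       # passed at call time
forced = {"linearize": False}     # always wins

merged = defaults | instance_kwargs | call_kwargs | forced
print(merged)  # {'radius': 3, 'blur': 0.5, 'linearize': False}
```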

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.radius:
        return ceil(self.radius)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Note that the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before scaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center).
            `True` applies the default values (6.5, 0.75). Note that the sigmoid slope must be in the range
            1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2**format_out.subsampling_w)
            h = round(height * 1 / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

EwaRobidouxSharp

EwaRobidouxSharp(**kwargs: Any)

Bases: Placebo

Ewa Robidoux Sharp resizer.

Initialize the kernel with optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Classes:

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the kernel with optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    Args:
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(None, None, None, **kwargs)

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

radius instance-attribute

radius = radius

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to use for custom error handling.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    kwargs.pop("format", None)

    return (
        {
            "sx": kwargs.pop("src_left", shift[1]),
            "sy": kwargs.pop("src_top", shift[0]),
            "width": width,
            "height": height,
            "filter": self._kernel,
            "radius": self.radius,
            "param1": self.b,
            "param2": self.c,
            "clamp": self.clamp,
            "taper": self.taper,
            "blur": self.blur,
            "antiring": self.antiring,
        }
        | self.kwargs
        | kwargs
        | {"linearize": False, "sigmoidize": False}
    )

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.radius:
        return ceil(self.radius)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2
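
The cached property above can be read as a small decision table: an explicit `radius` wins, otherwise the radius is derived from the bicubic `(b, c)` pair. A plain-function sketch of the same logic, with `fallback` inlined as "use the default when the value is None":

```python
from math import ceil

def kernel_radius(radius: "float | None", b: "float | None", c: "float | None") -> int:
    # Mirrors the cached property above, with `fallback` inlined.
    if radius:
        return ceil(radius)              # explicit override wins
    if b or c:
        b = 0 if b is None else b
        c = 0.5 if c is None else c
        if (b, c) == (0, 0):             # Hermite-like kernel
            return 1
    return 2                             # default bicubic radius
```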

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)
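
The `@deprecated(...)` decorator used above behaves like a thin warning wrapper. A self-contained sketch of the pattern (the `deprecated` decorator and `Scaler` class below are hypothetical stand-ins, not the actual implementation):

```python
import warnings
from functools import wraps

def deprecated(message: str):
    """Hypothetical stand-in for the decorator above: emit a
    DeprecationWarning, then call the wrapped method unchanged."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(message, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

class Scaler:
    def supersample(self, clip, rfactor=2.0):
        return f"supersampled x{rfactor}"

    @deprecated('The "multi" method is deprecated. Use "supersample" instead.')
    def multi(self, clip, multi=2.0):
        # Deprecated alias: forward straight to supersample.
        return self.supersample(clip, multi)
```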

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.
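
The `sigmoid` parameter's contract (True means the defaults `(6.5, 0.75)`; slope in 1.0-20.0; center in 0.0-1.0) can be captured in a small normalizer. This is an illustrative sketch under those documented constraints; the `normalize_sigmoid` helper is hypothetical, not the library's actual validation code:

```python
def normalize_sigmoid(sigmoid: "bool | tuple[float, float]") -> "tuple[float, float] | None":
    if sigmoid is False:
        return None                                   # sigmoid curve disabled
    # True selects the documented defaults; a tuple is used as-is.
    slope, center = (6.5, 0.75) if sigmoid is True else sigmoid
    if not 1.0 <= slope <= 20.0:
        raise ValueError("sigmoid slope must be in [1.0, 20.0]")
    if not 0.0 <= center <= 1.0:
        raise ValueError("sigmoid center must be in [0.0, 1.0]")
    return slope, center
```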

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before scaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0
            (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width / 2**format_out.subsampling_w)
            h = round(height / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
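
For subsampled formats, the loop above scales each chroma plane to the target size divided by `2**subsampling` in the corresponding dimension. A hedged sketch of that per-plane sizing (the `plane_size` helper is hypothetical):

```python
def plane_size(width: int, height: int, sub_w: int, sub_h: int, plane: int) -> "tuple[int, int]":
    if plane == 0:
        return width, height                 # luma keeps the full target size
    # Chroma planes shrink by 2**subsampling per axis (4:2:0 -> half in both).
    return round(width / 2**sub_w), round(height / 2**sub_h)
```

For a 1920x1080 target in 4:2:0 (`sub_w = sub_h = 1`), the chroma planes come out at 960x540.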

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
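
The destination size is `ceil(dimension * rfactor)` per axis, so non-integer factors round up rather than truncate. A minimal sketch of that computation (the `supersample_size` helper is hypothetical):

```python
from math import ceil

def supersample_size(width: int, height: int, rfactor: float) -> "tuple[int, int]":
    # Round each scaled dimension up, as in the method above.
    dst_width, dst_height = ceil(width * rfactor), ceil(height * rfactor)
    if max(dst_width, dst_height) <= 0:
        raise ValueError('"rfactor" must yield a positive resolution')
    return dst_width, dst_height
```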

Placebo

Placebo(
    radius: float | None = None,
    b: float | None = None,
    c: float | None = None,
    clamp: float = 0.0,
    blur: float = 0.0,
    taper: float = 0.0,
    antiring: float = 0.0,
    **kwargs: Any
)

Bases: ComplexScaler

Abstract Placebo scaler class.

This class and its subclasses depend on vs-placebo.

Initialize the scaler with optional arguments.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • radius

    (float | None, default: None ) –

    Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.

  • b

    (float | None, default: None ) –

    The 'b' parameter for bicubic interpolation.

  • c

    (float | None, default: None ) –

    The 'c' parameter for bicubic interpolation.

  • clamp

    (float, default: 0.0 ) –

    Represents an extra weighting/clamping coefficient for negative weights. A value of 0.0 represents no clamping. A value of 1.0 represents full clamping, i.e. all negative lobes will be removed.

  • blur

    (float, default: 0.0 ) –

    Additional blur coefficient. This effectively stretches the kernel, without changing the effective radius of the filter.

  • taper

    (float, default: 0.0 ) –

    Additional taper coefficient. This essentially flattens the function's center.

  • antiring

    (float, default: 0.0 ) –

    Antiringing strength.

  • **kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Classes:

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(
    self,
    radius: float | None = None,
    b: float | None = None,
    c: float | None = None,
    clamp: float = 0.0,
    blur: float = 0.0,
    taper: float = 0.0,
    antiring: float = 0.0,
    **kwargs: Any,
) -> None:
    """
    Initialize the scaler with optional arguments.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        radius: Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.
        b: The 'b' parameter for bicubic interpolation.
        c: The 'c' parameter for bicubic interpolation.
        clamp: Represents an extra weighting/clamping coefficient for negative weights. A value of 0.0 represents no
            clamping. A value of 1.0 represents full clamping, i.e. all negative lobes will be removed.
        blur: Additional blur coefficient. This effectively stretches the kernel, without changing the effective
            radius of the filter.
        taper: Additional taper coefficient. This essentially flattens the function's center.
        antiring: Antiringing strength.
        **kwargs: Keyword arguments that configure the internal scaling behavior.
    """
    self.radius = radius
    self.b = b
    self.c = c
    self.clamp = clamp
    self.blur = blur
    self.taper = taper
    self.antiring = antiring
    super().__init__(**kwargs)

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

radius instance-attribute

radius = radius

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...
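
The class above is only stubbed here; a minimal illustrative read-only variant of `functools.cached_property` (not the library's actual implementation) could look like:

```python
from functools import cached_property

class read_only_cached_property(cached_property):
    """Compute once and cache like cached_property, but block assignment."""

    def __set__(self, instance, value):
        raise AttributeError("read-only cached property")

class Kernel:
    calls = 0  # counts how often the property body runs

    @read_only_cached_property
    def kernel_radius(self) -> int:
        type(self).calls += 1
        return 2
```

Note that `cached_property.__get__` checks the instance cache itself, so the value is still computed only once even though adding `__set__` turns it into a data descriptor.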

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to report as the source of custom error messages.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to report as the source of custom error messages.

    Returns:
        Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to report as the source of custom error messages.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    Args:
        scaler: Scaler identifier (string, class, or instance).
        func_except: Function to report as the source of custom error messages.

    Returns:
        Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    kwargs.pop("format", None)

    return (
        {
            "sx": kwargs.pop("src_left", shift[1]),
            "sy": kwargs.pop("src_top", shift[0]),
            "width": width,
            "height": height,
            "filter": self._kernel,
            "radius": self.radius,
            "param1": self.b,
            "param2": self.c,
            "clamp": self.clamp,
            "taper": self.taper,
            "blur": self.blur,
            "antiring": self.antiring,
        }
        | self.kwargs
        | kwargs
        | {"linearize": False, "sigmoidize": False}
    )

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.radius:
        return ceil(self.radius)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    Args:
        clip: The source clip.
        multi: Supersampling factor.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Returns:
        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    Returns:
        Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    Args:
        clip: The source clip.
        width: Target width (defaults to clip width if None).
        height: Target height (defaults to clip height if None).
        shift: Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a
            list is given, the shift is applied per plane.
        linear: Whether to linearize the input before scaling. If None, inferred from sigmoid.
        sigmoid: Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). `True`
            applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0
            (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).
        border_handling: Method for handling image borders during sampling.
        sample_grid_model: Model used to align sampling grid.
        sar: Sample aspect ratio to assume or convert to.
        dar: Desired display aspect ratio.
        dar_in: Input display aspect ratio, if different from clip's.
        keep_ar: Whether to adjust dimensions to preserve aspect ratio.
        blur: Amount of blur to apply during scaling.

    Returns:
        Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur,
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc))

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2**format_in.subsampling_w
    factor_h = 1 / 2**format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width / 2**format_out.subsampling_w)
            h = round(height / 2**format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs,
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    Args:
        clip: The source clip.
        rfactor: Scaling factor for supersampling.
        shift: Subpixel shift (top, left) applied during scaling.
        **kwargs: Additional arguments forwarded to the scale function.

    Raises:
        CustomValueError: If resulting resolution is non-positive.

    Returns:
        The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]