
placebo

Classes:

EwaBicubic

EwaBicubic(
    b: float = 0.0, c: float = 0.5, radius: int | None = None, **kwargs: Any
)

Bases: Placebo

Ewa Bicubic resizer.

Initialize the scaler with specific 'b' and 'c' parameters and optional arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • b

    (float, default: 0.0 ) –

    The 'b' parameter for bicubic interpolation.

  • c

    (float, default: 0.5 ) –

    The 'c' parameter for bicubic interpolation.

  • radius

    (int | None, default: None ) –

    Overrides the filter kernel radius.

  • kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.
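
Example — a minimal usage sketch (the import path and the BlankClip source are assumptions made for illustration only):

import vapoursynth as vs
from vskernels import EwaBicubic  # assumed import path

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

# Default parameters (b=0.0, c=0.5, i.e. Catmull-Rom); the radius is derived automatically.
upscaled = EwaBicubic().scale(clip, 2560, 1440)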

Classes:

  • cached_property

    Read only version of functools.cached_property.

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

  • antiring
  • b
  • blur
  • c
  • clamp
  • kwargs

    Arguments passed to the implemented funcs or internal scale function.

  • scale_function
  • taper
  • taps

Source code
def __init__(self, b: float = 0.0, c: float = 0.5, radius: int | None = None, **kwargs: Any) -> None:
    """
    Initialize the scaler with specific 'b' and 'c' parameters and optional arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    :param b:       The 'b' parameter for bicubic interpolation.
    :param c:       The 'c' parameter for bicubic interpolation.
    :param radius:  Overrides the filter kernel radius.
    :param kwargs:  Keyword arguments that configure the internal scaling behavior.
    """
    radius = kwargs.pop("taps", radius)

    if radius is None:
        if (b, c) == (0, 0):
            radius = 1
        else:
            radius = 2

    super().__init__(radius, b, c, **kwargs)
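
A plain-Python restatement of the radius selection above (illustrative only, not library code): "taps" is accepted as an alias for "radius", and when neither is given the radius defaults to 1 for (b, c) == (0, 0) and to 2 otherwise.

def default_radius(b: float, c: float, radius: int | None = None, **kwargs) -> int:
    # Mirror of the constructor logic: 'taps' acts as an alias for 'radius'.
    radius = kwargs.pop("taps", radius)
    if radius is None:
        radius = 1 if (b, c) == (0, 0) else 2
    return radius

assert default_radius(0.0, 0.0) == 1
assert default_radius(0.0, 0.5) == 2          # the class defaults
assert default_radius(0.0, 0.5, taps=3) == 3  # an explicit tap count wins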

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

taps instance-attribute

taps = taps

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
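
A hedged sketch of how ensure_obj normalizes its input (behavior inferred from the signature; treat the exact resolution rules, especially for strings, as an assumption):

from vskernels import EwaBicubic  # assumed import path

# Hypothetical sketch (resolution rules are an assumption, not documented behavior).
default_scaler = EwaBicubic.ensure_obj(None)                   # no input -> an instance of the class itself
passthrough = EwaBicubic.ensure_obj(EwaBicubic(b=0.2, c=0.4))  # instances are returned as-is
from_type = EwaBicubic.ensure_obj(EwaBicubic)                  # a type is instantiated with defaults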

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    return (
        dict(
            sx=kwargs.pop("src_left", shift[1]),
            sy=kwargs.pop("src_top", shift[0]),
            width=width,
            height=height,
            filter=self._kernel,
            radius=self.taps,
            param1=self.b,
            param2=self.c,
            clamp=self.clamp,
            taper=self.taper,
            blur=self.blur,
            antiring=self.antiring,
        )
        | self.kwargs
        | kwargs
        | dict(linearize=False, sigmoidize=False)
    )
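
The dictionary unions above establish a clear precedence: per-call keyword arguments override the instance-level self.kwargs, which in turn override the defaults built from the shift and kernel parameters, while linearize and sigmoidize are always forced off last (linear-light handling is presumably left to the scale wrapper). A standalone illustration of that precedence with plain dictionaries:

# Plain-Python illustration of the dict-union precedence (not library code).
defaults = dict(sx=0.0, sy=0.0, antiring=0.0, linearize=True)
instance_kwargs = dict(antiring=0.5)   # stand-in for self.kwargs set at __init__ time
call_kwargs = dict(sx=0.25)            # stand-in for **kwargs passed at call time

merged = defaults | instance_kwargs | call_kwargs | dict(linearize=False, sigmoidize=False)
assert merged["antiring"] == 0.5       # instance kwargs beat the built-in defaults
assert merged["sx"] == 0.25            # call-time kwargs beat everything before them
assert merged["linearize"] is False    # always forced off, whatever was requested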

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.taps:
        return ceil(self.taps)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.
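
Example — a hypothetical sketch of the linear-light options (import path and clip creation are assumptions):

import vapoursynth as vs
from vskernels import EwaBicubic  # assumed import path

core = vs.core
clip = core.std.BlankClip(format=vs.RGBS, width=1920, height=1080)

# sigmoid=True uses the default (slope, center) of (6.5, 0.75); linear is inferred from sigmoid.
upscaled = EwaBicubic().scale(clip, 3840, 2160, sigmoid=True)

# Explicit sigmoid curve; the slope must be within 1.0-20.0 and the center within 0.0-1.0.
upscaled_custom = EwaBicubic().scale(clip, 3840, 2160, linear=True, sigmoid=(8.0, 0.6))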

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    :param clip:                The source clip.
    :param width:               Target width (defaults to clip width if None).
    :param height:              Target height (defaults to clip height if None).
    :param shift:               Subpixel shift (top, left) applied during scaling.
                                If a tuple is provided, it is used uniformly.
                                If a list is given, the shift is applied per plane.
    :param linear:              Whether to linearize the input before descaling. If None, inferred from sigmoid.
    :param sigmoid:             Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center).
                                `True` applies the defaults values (6.5, 0.75).
                                Keep in mind sigmoid slope has to be in range 1.0-20.0 (inclusive)
                                and sigmoid center has to be in range 0.0-1.0 (inclusive).
    :param border_handling:     Method for handling image borders during sampling.
    :param sample_grid_model:   Model used to align sampling grid.
    :param sar:                 Sample aspect ratio to assume or convert to.
    :param dar:                 Desired display aspect ratio.
    :param dar_in:              Input display aspect ratio, if different from clip's.
    :param keep_ar:             Whether to adjust dimensions to preserve aspect ratio.
    :param blur:                Amount of blur to apply during scaling.
    :return:                    Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(
        fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc)
    )

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2 ** format_in.subsampling_w
    factor_h = 1 / 2 ** format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2 ** format_out.subsampling_w)
            h = round(height * 1 / 2 ** format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
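
When list-valued shifts are used on a subsampled clip, each plane is scaled with its own (top, left) pair, and the chroma planes additionally receive the chroma-location offsets computed above. A hypothetical sketch (import path and shift values are illustrative):

import vapoursynth as vs
from vskernels import EwaBicubic  # assumed import path

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P16, width=1920, height=1080)

# One shift value per plane: luma untouched, both chroma planes nudged.
shifted = EwaBicubic().scale(
    clip, 1920, 1080,
    shift=([0.0, 0.25, 0.25], [0.0, 0.5, 0.5]),
)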

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
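
A short usage sketch (import path and clip creation are assumptions):

import vapoursynth as vs
from vskernels import EwaBicubic  # assumed import path

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P16, width=960, height=540)

# rfactor scales both dimensions; the result is ceil(width * rfactor) x ceil(height * rfactor).
ss = EwaBicubic().supersample(clip, rfactor=2)
assert (ss.width, ss.height) == (1920, 1080)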

EwaGinseng

EwaGinseng(taps: float = 3.238315484166236, **kwargs: Any)

Bases: Placebo

Ewa Ginseng resizer.

Initialize the kernel with a specific number of taps and optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • taps

    (float, default: 3.238315484166236 ) –

    The number of taps used for Ginseng interpolation.

  • kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.
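
Usage sketch (hypothetical; the import path is an assumption):

import vapoursynth as vs
from vskernels import EwaGinseng  # assumed import path

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

# The default tap count (~3.2383) is the third zero of the jinc function; larger values widen the kernel.
upscaled = EwaGinseng(taps=4.0).scale(clip, 2560, 1440)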

Classes:

  • cached_property

    Read only version of functools.cached_property.

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

  • antiring
  • b
  • blur
  • c
  • clamp
  • kwargs

    Arguments passed to the implemented funcs or internal scale function.

  • scale_function
  • taper
  • taps

Source code
def __init__(self, taps: float = 3.2383154841662362076499, **kwargs: Any) -> None:
    """
    Initialize the kernel with a specific number of taps and optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    :param taps:    The number of taps used for Ginseng interpolation.
    :param kwargs:  Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(taps, None, None, **kwargs)

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

taps instance-attribute

taps = taps

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    return (
        dict(
            sx=kwargs.pop("src_left", shift[1]),
            sy=kwargs.pop("src_top", shift[0]),
            width=width,
            height=height,
            filter=self._kernel,
            radius=self.taps,
            param1=self.b,
            param2=self.c,
            clamp=self.clamp,
            taper=self.taper,
            blur=self.blur,
            antiring=self.antiring,
        )
        | self.kwargs
        | kwargs
        | dict(linearize=False, sigmoidize=False)
    )

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.taps:
        return ceil(self.taps)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    :param clip:                The source clip.
    :param width:               Target width (defaults to clip width if None).
    :param height:              Target height (defaults to clip height if None).
    :param shift:               Subpixel shift (top, left) applied during scaling.
                                If a tuple is provided, it is used uniformly.
                                If a list is given, the shift is applied per plane.
    :param linear:              Whether to linearize the input before descaling. If None, inferred from sigmoid.
    :param sigmoid:             Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center).
                                `True` applies the defaults values (6.5, 0.75).
                                Keep in mind sigmoid slope has to be in range 1.0-20.0 (inclusive)
                                and sigmoid center has to be in range 0.0-1.0 (inclusive).
    :param border_handling:     Method for handling image borders during sampling.
    :param sample_grid_model:   Model used to align sampling grid.
    :param sar:                 Sample aspect ratio to assume or convert to.
    :param dar:                 Desired display aspect ratio.
    :param dar_in:              Input display aspect ratio, if different from clip's.
    :param keep_ar:             Whether to adjust dimensions to preserve aspect ratio.
    :param blur:                Amount of blur to apply during scaling.
    :return:                    Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(
        fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc)
    )

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2 ** format_in.subsampling_w
    factor_h = 1 / 2 ** format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2 ** format_out.subsampling_w)
            h = round(height * 1 / 2 ** format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

EwaHann

EwaHann(taps: float = 3.238315484166236, **kwargs: Any)

Bases: Placebo

Ewa Hann resizer.

Initialize the kernel with a specific number of taps and optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • taps

    (float, default: 3.238315484166236 ) –

    The number of taps used for Hann interpolation.

  • kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.
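
Usage sketch (hypothetical; the import path is an assumption, and antiring is forwarded through the base Placebo initializer, as the antiring attribute below suggests):

import vapoursynth as vs
from vskernels import EwaHann  # assumed import path

core = vs.core
clip = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

# Hann-windowed EWA resize with ringing suppression.
upscaled = EwaHann(antiring=0.7).scale(clip, 2560, 1440)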

Classes:

  • cached_property

    Read only version of functools.cached_property.

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

  • antiring
  • b
  • blur
  • c
  • clamp
  • kwargs

    Arguments passed to the implemented funcs or internal scale function.

  • scale_function
  • taper
  • taps

Source code
def __init__(self, taps: float = 3.2383154841662362076499, **kwargs: Any) -> None:
    """
    Initialize the kernel with a specific number of taps and optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    :param taps:    The number of taps used for Hann interpolation.
    :param kwargs:  Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(taps, None, None, **kwargs)

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

taps instance-attribute

taps = taps

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    return (
        dict(
            sx=kwargs.pop("src_left", shift[1]),
            sy=kwargs.pop("src_top", shift[0]),
            width=width,
            height=height,
            filter=self._kernel,
            radius=self.taps,
            param1=self.b,
            param2=self.c,
            clamp=self.clamp,
            taper=self.taper,
            blur=self.blur,
            antiring=self.antiring,
        )
        | self.kwargs
        | kwargs
        | dict(linearize=False, sigmoidize=False)
    )

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.taps:
        return ceil(self.taps)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    :param clip:                The source clip.
    :param width:               Target width (defaults to clip width if None).
    :param height:              Target height (defaults to clip height if None).
    :param shift:               Subpixel shift (top, left) applied during scaling.
                                If a tuple is provided, it is used uniformly.
                                If a list is given, the shift is applied per plane.
    :param linear:              Whether to linearize the input before descaling. If None, inferred from sigmoid.
    :param sigmoid:             Whether to use sigmoid transfer curve. Can be True, False, or a tuple of (slope, center).
                                `True` applies the defaults values (6.5, 0.75).
                                Keep in mind sigmoid slope has to be in range 1.0-20.0 (inclusive)
                                and sigmoid center has to be in range 0.0-1.0 (inclusive).
    :param border_handling:     Method for handling image borders during sampling.
    :param sample_grid_model:   Model used to align sampling grid.
    :param sar:                 Sample aspect ratio to assume or convert to.
    :param dar:                 Desired display aspect ratio.
    :param dar_in:              Input display aspect ratio, if different from clip's.
    :param keep_ar:             Whether to adjust dimensions to preserve aspect ratio.
    :param blur:                Amount of blur to apply during scaling.
    :return:                    Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(
        fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc)
    )

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2 ** format_in.subsampling_w
    factor_h = 1 / 2 ** format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2 ** format_out.subsampling_w)
            h = round(height * 1 / 2 ** format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
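
A quick sketch of the arithmetic above (import path assumed; `clip` stands for any fixed-resolution VapourSynth clip):

from math import ceil

from vskernels import EwaBicubic  # assumed import path

# For a 1920x1080 source, rfactor=2.5 requests ceil(1920 * 2.5) x ceil(1080 * 2.5).
assert (ceil(1920 * 2.5), ceil(1080 * 2.5)) == (4800, 2700)

supersampled = EwaBicubic().supersample(clip, rfactor=2.5)  # `clip`: any fixed-resolution clip
# A zero or negative rfactor raises CustomValueError before any scaling is attempted.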

EwaJinc

EwaJinc(taps: float = 3.238315484166236, **kwargs: Any)

Bases: Placebo

Ewa Jinc resizer.

Initialize the kernel with a specific number of taps and optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • taps

    (float, default: 3.238315484166236 ) –

    The number of taps used for Jinc interpolation.

  • kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Classes:

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(self, taps: float = 3.2383154841662362076499, **kwargs: Any) -> None:
    """
    Initialize the kernel with a specific number of taps and optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    :param taps:    The number of taps used for Jinc interpolation.
    :param kwargs:  Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(taps, None, None, **kwargs)
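
Construction sketch (import path assumed): `taps` is forwarded to libplacebo as the filter radius, and extra keyword arguments such as `antiring` end up on the instance and in `get_scale_args`.

from vskernels import EwaJinc  # assumed import path

jinc = EwaJinc()                       # default radius of roughly 3.2383
wider = EwaJinc(taps=4, antiring=0.5)  # larger radius plus antiringing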

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

taps instance-attribute

taps = taps

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to use for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
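
A hedged sketch of how this is typically used (import path assumed; the exact resolution rules live in `_base_ensure_obj`):

from vskernels import EwaJinc  # assumed import path

a = EwaJinc.ensure_obj()                 # nothing given: presumably a default-constructed EwaJinc
b = EwaJinc.ensure_obj(EwaJinc)          # a class: instantiated for you
c = EwaJinc.ensure_obj(EwaJinc(taps=4))  # an instance: returned as-is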

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to use for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    return (
        dict(
            sx=kwargs.pop("src_left", shift[1]),
            sy=kwargs.pop("src_top", shift[0]),
            width=width,
            height=height,
            filter=self._kernel,
            radius=self.taps,
            param1=self.b,
            param2=self.c,
            clamp=self.clamp,
            taper=self.taper,
            blur=self.blur,
            antiring=self.antiring,
        )
        | self.kwargs
        | kwargs
        | dict(linearize=False, sigmoidize=False)
    )
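
The dict unions above read left to right, so later operands win. An illustrative sketch of that precedence with made-up values:

defaults = {"radius": 3.24, "antiring": 0.0}  # stands in for the dict built from the attributes above
instance_kwargs = {"antiring": 0.5}           # self.kwargs, captured at __init__ time
call_kwargs = {"radius": 4.0}                 # **kwargs passed to the call itself

merged = defaults | instance_kwargs | call_kwargs | {"linearize": False, "sigmoidize": False}
assert merged == {"radius": 4.0, "antiring": 0.5, "linearize": False, "sigmoidize": False}

The trailing union forces `linearize`/`sigmoidize` off unconditionally, presumably so that linear-light handling goes through the `linear`/`sigmoid` parameters of `scale` rather than libplacebo's own conversion.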

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.taps:
        return ceil(self.taps)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2
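
In other words, a truthy `taps` wins and is rounded up; otherwise the `(b, c)` pair decides between a radius of 1 and 2. For example, using the constructor defaults shown above:

from math import ceil

assert ceil(3.238315484166236) == 4  # EwaJinc / EwaLanczos defaults -> kernel_radius of 4
# EwaRobidoux passes taps=None and b=c=None, so the final fallback returns 2.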

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)
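
Migration sketch: `multi` simply forwards to `supersample` and emits a DeprecationWarning when called (import path assumed; `clip` defined elsewhere).

from vskernels import EwaJinc  # assumed import path

upscaled = EwaJinc().multi(clip, 2.0)                # old spelling: still works, but warns
upscaled = EwaJinc().supersample(clip, rfactor=2.0)  # preferred spelling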

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    :param clip:                The source clip.
    :param width:               Target width (defaults to clip width if None).
    :param height:              Target height (defaults to clip height if None).
    :param shift:               Subpixel shift (top, left) applied during scaling.
                                If a tuple is provided, it is used uniformly.
                                If a list is given, the shift is applied per plane.
    :param linear:              Whether to linearize the input before descaling. If None, inferred from sigmoid.
    :param sigmoid:             Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center).
                                `True` applies the default values (6.5, 0.75).
                                Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive)
                                and the sigmoid center in the range 0.0-1.0 (inclusive).
    :param border_handling:     Method for handling image borders during sampling.
    :param sample_grid_model:   Model used to align sampling grid.
    :param sar:                 Sample aspect ratio to assume or convert to.
    :param dar:                 Desired display aspect ratio.
    :param dar_in:              Input display aspect ratio, if different from clip's.
    :param keep_ar:             Whether to adjust dimensions to preserve aspect ratio.
    :param blur:                Amount of blur to apply during scaling.
    :return:                    Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(
        fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc)
    )

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2 ** format_in.subsampling_w
    factor_h = 1 / 2 ** format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2 ** format_out.subsampling_w)
            h = round(height * 1 / 2 ** format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

EwaLanczos

EwaLanczos(taps: float = 3.238315484166236, **kwargs: Any)

Bases: Placebo

Ewa Lanczos resizer.

Initialize the kernel with a specific number of taps and optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • taps

    (float, default: 3.238315484166236 ) –

    The number of taps used for Lanczos interpolation.

  • kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Classes:

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(self, taps: float = 3.2383154841662362076499, **kwargs: Any) -> None:
    """
    Initialize the kernel with a specific number of taps and optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    :param taps:    The number of taps used for Lanczos interpolation.
    :param kwargs:  Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(taps, None, None, **kwargs)
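
Sketch of combining this kernel with the linear-light options documented under scale below (import path assumed; `clip` defined elsewhere):

from vskernels import EwaLanczos  # assumed import path

lanczos = EwaLanczos(taps=3.5)

# Downscale in linear light using the default sigmoid curve (slope 6.5, center 0.75).
down = lanczos.scale(clip, 1280, 720, sigmoid=True)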

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

taps instance-attribute

taps = taps

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to use for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to use for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    return (
        dict(
            sx=kwargs.pop("src_left", shift[1]),
            sy=kwargs.pop("src_top", shift[0]),
            width=width,
            height=height,
            filter=self._kernel,
            radius=self.taps,
            param1=self.b,
            param2=self.c,
            clamp=self.clamp,
            taper=self.taper,
            blur=self.blur,
            antiring=self.antiring,
        )
        | self.kwargs
        | kwargs
        | dict(linearize=False, sigmoidize=False)
    )

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.taps:
        return ceil(self.taps)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    :param clip:                The source clip.
    :param width:               Target width (defaults to clip width if None).
    :param height:              Target height (defaults to clip height if None).
    :param shift:               Subpixel shift (top, left) applied during scaling.
                                If a tuple is provided, it is used uniformly.
                                If a list is given, the shift is applied per plane.
    :param linear:              Whether to linearize the input before descaling. If None, inferred from sigmoid.
    :param sigmoid:             Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center).
                                `True` applies the default values (6.5, 0.75).
                                Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive)
                                and the sigmoid center in the range 0.0-1.0 (inclusive).
    :param border_handling:     Method for handling image borders during sampling.
    :param sample_grid_model:   Model used to align sampling grid.
    :param sar:                 Sample aspect ratio to assume or convert to.
    :param dar:                 Desired display aspect ratio.
    :param dar_in:              Input display aspect ratio, if different from clip's.
    :param keep_ar:             Whether to adjust dimensions to preserve aspect ratio.
    :param blur:                Amount of blur to apply during scaling.
    :return:                    Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(
        fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc)
    )

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2 ** format_in.subsampling_w
    factor_h = 1 / 2 ** format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2 ** format_out.subsampling_w)
            h = round(height * 1 / 2 ** format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

EwaRobidoux

EwaRobidoux(**kwargs: Any)

Bases: Placebo

Ewa Robidoux resizer.

Initialize the kernel with optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Classes:

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the kernel with optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    :param kwargs:  Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(None, None, None, **kwargs)
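
The Robidoux variants pass no radius or shape parameters of their own, so the underlying libplacebo filter's defaults apply and only the generic tuning keywords remain configurable. A small sketch (import path assumed; `clip` defined elsewhere):

from vskernels import EwaRobidoux  # assumed import path

robidoux = EwaRobidoux(antiring=0.5)  # only the generic tuning knobs remain configurable
out = robidoux.scale(clip, 1280, 720)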

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

taps instance-attribute

taps = taps

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to use for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to use for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    return (
        dict(
            sx=kwargs.pop("src_left", shift[1]),
            sy=kwargs.pop("src_top", shift[0]),
            width=width,
            height=height,
            filter=self._kernel,
            radius=self.taps,
            param1=self.b,
            param2=self.c,
            clamp=self.clamp,
            taper=self.taper,
            blur=self.blur,
            antiring=self.antiring,
        )
        | self.kwargs
        | kwargs
        | dict(linearize=False, sigmoidize=False)
    )

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.taps:
        return ceil(self.taps)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before descaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    :param clip:                The source clip.
    :param width:               Target width (defaults to clip width if None).
    :param height:              Target height (defaults to clip height if None).
    :param shift:               Subpixel shift (top, left) applied during scaling.
                                If a tuple is provided, it is used uniformly.
                                If a list is given, the shift is applied per plane.
    :param linear:              Whether to linearize the input before descaling. If None, inferred from sigmoid.
    :param sigmoid:             Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center).
                                `True` applies the default values (6.5, 0.75).
                                Keep in mind the sigmoid slope has to be in the range 1.0-20.0 (inclusive)
                                and the sigmoid center in the range 0.0-1.0 (inclusive).
    :param border_handling:     Method for handling image borders during sampling.
    :param sample_grid_model:   Model used to align sampling grid.
    :param sar:                 Sample aspect ratio to assume or convert to.
    :param dar:                 Desired display aspect ratio.
    :param dar_in:              Input display aspect ratio, if different from clip's.
    :param keep_ar:             Whether to adjust dimensions to preserve aspect ratio.
    :param blur:                Amount of blur to apply during scaling.
    :return:                    Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(
        fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc)
    )

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2 ** format_in.subsampling_w
    factor_h = 1 / 2 ** format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2 ** format_out.subsampling_w)
            h = round(height * 1 / 2 ** format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged
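
A minimal usage sketch of per-plane shifting (assuming a constant-format YUV 4:2:0 source; the import path and shift values are illustrative only):

import vapoursynth as vs
from vskernels import EwaRobidouxSharp  # assumed import path; adjust to your install

core = vs.core
src = core.std.BlankClip(format=vs.YUV420P16, width=1280, height=720)

scaler = EwaRobidouxSharp()

# A plain (top, left) tuple shifts every plane by the same amount.
uniform = scaler.scale(src, 1920, 1080, shift=(0.25, 0.25))

# Lists are interpreted per plane: one top/left value for Y, U and V respectively.
per_plane = scaler.scale(src, 1920, 1080, shift=([0.0, 0.25, 0.25], [0.0, 0.5, 0.5]))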

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If the resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If the resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
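
Continuing the sketch above, supersampling simply multiplies both dimensions before delegating to scale (a usage sketch, not library code):

# Double the resolution; fractional factors are allowed and dimensions are rounded up with ceil().
doubled = scaler.supersample(src, rfactor=2.0)

# A non-positive rfactor raises CustomValueError, since the target resolution must stay positive.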

EwaRobidouxSharp

EwaRobidouxSharp(**kwargs: Any)

Bases: Placebo

Ewa Robidoux Sharp resizer.

Initialize the kernel with optional keyword arguments.

These keyword arguments are automatically forwarded to the _implemented_funcs methods but only if the method explicitly accepts them as named parameters. If the same keyword is passed to both __init__ and one of the _implemented_funcs, the one passed to func takes precedence.

Parameters:

  • kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Classes:

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(self, **kwargs: Any) -> None:
    """
    Initialize the kernel with optional keyword arguments.

    These keyword arguments are automatically forwarded to the `_implemented_funcs` methods
    but only if the method explicitly accepts them as named parameters.
    If the same keyword is passed to both `__init__` and one of the `_implemented_funcs`,
    the one passed to `func` takes precedence.

    :param kwargs:  Keyword arguments that configure the internal scaling behavior.
    """
    super().__init__(None, None, None, **kwargs)
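
A brief instantiation sketch (assuming the same import path as above; antiring is one of the Placebo options forwarded to the underlying resample call):

scaler = EwaRobidouxSharp(antiring=0.5)
resized = scaler.scale(src, 1280, 720)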

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

taps instance-attribute

taps = taps

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
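
For example (a sketch; string resolution depends on the scaler registry shipped with the library):

# None should fall back to the class itself, yielding a default-constructed instance.
scaler = EwaRobidouxSharp.ensure_obj(None)

# An already-constructed instance is expected to be passed through unchanged.
same = EwaRobidouxSharp.ensure_obj(scaler)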

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    return (
        dict(
            sx=kwargs.pop("src_left", shift[1]),
            sy=kwargs.pop("src_top", shift[0]),
            width=width,
            height=height,
            filter=self._kernel,
            radius=self.taps,
            param1=self.b,
            param2=self.c,
            clamp=self.clamp,
            taper=self.taper,
            blur=self.blur,
            antiring=self.antiring,
        )
        | self.kwargs
        | kwargs
        | dict(linearize=False, sigmoidize=False)
    )
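
The merged dict is what ultimately reaches the placebo-backed resample call; a small sketch of the precedence (assuming scaler and src from the earlier examples):

# Later entries win: computed defaults < self.kwargs < call-time kwargs < forced linearize/sigmoidize=False.
args = scaler.get_scale_args(src, shift=(0.25, 0.5), width=1920, height=1080)

assert args["sy"] == 0.25 and args["sx"] == 0.5  # shift is (top, left), mapped to (sy, sx)
assert args["linearize"] is False and args["sigmoidize"] is False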

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.taps:
        return ceil(self.taps)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2
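
In practice this means (a sketch of the effective values, using classes documented on this page):

EwaRobidouxSharp().kernel_radius   # no taps/b/c set -> falls through to the default of 2
EwaBicubic(0, 0).kernel_radius     # (b, c) == (0, 0) -> taps resolved to 1 in __init__ -> 1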

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    :param clip:                The source clip.
    :param width:               Target width (defaults to clip width if None).
    :param height:              Target height (defaults to clip height if None).
    :param shift:               Subpixel shift (top, left) applied during scaling.
                                If a tuple is provided, it is used uniformly.
                                If a list is given, the shift is applied per plane.
    :param linear:              Whether to linearize the input before scaling. If None, inferred from sigmoid.
    :param sigmoid:             Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center).
                                `True` applies the default values (6.5, 0.75).
                                Keep in mind the sigmoid slope must be in the range 1.0-20.0 (inclusive)
                                and the sigmoid center in the range 0.0-1.0 (inclusive).
    :param border_handling:     Method for handling image borders during sampling.
    :param sample_grid_model:   Model used to align sampling grid.
    :param sar:                 Sample aspect ratio to assume or convert to.
    :param dar:                 Desired display aspect ratio.
    :param dar_in:              Input display aspect ratio, if different from clip's.
    :param keep_ar:             Whether to adjust dimensions to preserve aspect ratio.
    :param blur:                Amount of blur to apply during scaling.
    :return:                    Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(
        fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc)
    )

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2 ** format_in.subsampling_w
    factor_h = 1 / 2 ** format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2 ** format_out.subsampling_w)
            h = round(height * 1 / 2 ** format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If the resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If the resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

Placebo

Placebo(
    taps: float | None = None,
    b: float | None = None,
    c: float | None = None,
    clamp: float = 0.0,
    blur: float = 0.0,
    taper: float = 0.0,
    antiring: float = 0.0,
    **kwargs: Any
)

Bases: ComplexScaler

Abstract Placebo scaler class.

This class and its subclasses depend on vs-placebo

Initialize the scaler with optional arguments.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • taps

    (float | None, default: None ) –

    Overrides the filter kernel radius. Has no effect if the filter kernel is not resizeable.

  • b

    (float | None, default: None ) –

    The 'b' parameter for bicubic interpolation.

  • c

    (float | None, default: None ) –

    The 'c' parameter for bicubic interpolation.

  • clamp

    (float, default: 0.0 ) –

    Represents an extra weighting/clamping coefficient for negative weights. A value of 0.0 represents no clamping. A value of 1.0 represents full clamping, i.e. all negative lobes will be removed.

  • blur

    (float, default: 0.0 ) –

    Additional blur coefficient. This effectively stretches the kernel, without changing the effective radius of the filter.

  • taper

    (float, default: 0.0 ) –

    Additional taper coefficient. This essentially flattens the function's center.

  • antiring

    (float, default: 0.0 ) –

    Antiringing strength.

  • kwargs

    (Any, default: {} ) –

    Keyword arguments that configure the internal scaling behavior.

Classes:

Methods:

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args
  • kernel_radius
  • multi

    Deprecated alias for supersample.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale a clip to the given resolution, with aspect ratio and linear light support.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(
    self,
    taps: float | None = None,
    b: float | None = None,
    c: float | None = None,
    clamp: float = 0.0,
    blur: float = 0.0,
    taper: float = 0.0,
    antiring: float = 0.0,
    **kwargs: Any,
) -> None:
    """
    Initialize the scaler with optional arguments.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    :param taps:        Overrides the filter kernel radius.
                        Has no effect if the filter kernel is not resizeable.
    :param b:           The 'b' parameter for bicubic interpolation.
    :param c:           The 'c' parameter for bicubic interpolation.
    :param clamp:       Represents an extra weighting/clamping coefficient for negative weights.
                        A value of 0.0 represents no clamping.
                        A value of 1.0 represents full clamping, i.e. all negative lobes will be removed.
    :param blur:        Additional blur coefficient.
                        This effectively stretches the kernel, without changing the effective radius of the filter.
    :param taper:       Additional taper coefficient. This essentially flattens the function's center.
    :param antiring:    Antiringing strength.
    :param kwargs:      Keyword arguments that configure the internal scaling behavior.
    """
    self.taps = taps
    self.b = b
    self.c = c
    self.clamp = clamp
    self.blur = blur
    self.taper = taper
    self.antiring = antiring
    super().__init__(**kwargs)
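
Concrete resizers on this page are thin wrappers around this base; a purely hypothetical subclass sketch (the _kernel attribute is inferred from get_scale_args shown on this page, and "ewa_jinc" is only an assumed filter identifier):

from typing import Any

class MyEwaJinc(Placebo):
    """Hypothetical EWA Jinc resizer backed by vs-placebo."""

    _kernel = "ewa_jinc"  # assumed libplacebo filter name

    def __init__(self, taps: float | None = None, **kwargs: Any) -> None:
        super().__init__(taps, None, None, **kwargs)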

antiring instance-attribute

antiring = antiring

b instance-attribute

b = b

blur instance-attribute

blur = blur

c instance-attribute

c = c

clamp instance-attribute

clamp = clamp

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

scale_function class-attribute instance-attribute

scale_function = Resample

taper instance-attribute

taper = taper

taps instance-attribute

taps = taps

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]
Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    return (
        dict(
            sx=kwargs.pop("src_left", shift[1]),
            sy=kwargs.pop("src_top", shift[0]),
            width=width,
            height=height,
            filter=self._kernel,
            radius=self.taps,
            param1=self.b,
            param2=self.c,
            clamp=self.clamp,
            taper=self.taper,
            blur=self.blur,
            antiring=self.antiring,
        )
        | self.kwargs
        | kwargs
        | dict(linearize=False, sigmoidize=False)
    )

kernel_radius

kernel_radius() -> int
Source code
@ComplexScaler.cached_property
def kernel_radius(self) -> int:
    if self.taps:
        return ceil(self.taps)

    if self.b or self.c:
        b, c = fallback(self.b, 0), fallback(self.c, 0.5)

        if (b, c) == (0, 0):
            return 1

    return 2

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (
        0,
        0,
    ),
    *,
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    border_handling: BorderHandling = MIRROR,
    sample_grid_model: SampleGridModel = MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    blur: float | None = None,
    **kwargs: Any
) -> VideoNode | ConstantFormatVideoNode

Scale a clip to the given resolution, with aspect ratio and linear light support.

Keyword arguments passed during initialization are automatically injected here, unless explicitly overridden by the arguments provided at call time. Only arguments that match named parameters in this method are injected.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • width

    (int | None, default: None ) –

    Target width (defaults to clip width if None).

  • height

    (int | None, default: None ) –

    Target height (defaults to clip height if None).

  • shift

    (tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling. If a tuple is provided, it is used uniformly. If a list is given, the shift is applied per plane.

  • linear

    (bool | None, default: None ) –

    Whether to linearize the input before scaling. If None, inferred from sigmoid.

  • sigmoid

    (bool | tuple[Slope, Center], default: False ) –

    Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center). True applies the default values (6.5, 0.75). Keep in mind the sigmoid slope must be in the range 1.0-20.0 (inclusive) and the sigmoid center in the range 0.0-1.0 (inclusive).

  • border_handling

    (BorderHandling, default: MIRROR ) –

    Method for handling image borders during sampling.

  • sample_grid_model

    (SampleGridModel, default: MATCH_EDGES ) –

    Model used to align sampling grid.

  • sar

    (Sar | float | bool | None, default: None ) –

    Sample aspect ratio to assume or convert to.

  • dar

    (Dar | float | bool | None, default: None ) –

    Desired display aspect ratio.

  • dar_in

    (Dar | bool | float | None, default: None ) –

    Input display aspect ratio, if different from clip's.

  • keep_ar

    (bool | None, default: None ) –

    Whether to adjust dimensions to preserve aspect ratio.

  • blur

    (float | None, default: None ) –

    Amount of blur to apply during scaling.

Returns:

  • VideoNode | ConstantFormatVideoNode

    Scaled clip, optionally aspect-corrected and linearized.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    # ComplexScaler adds shift per planes
    shift: tuple[TopShift | list[TopShift], LeftShift | list[LeftShift]] = (0, 0),
    *,
    # `linear` and `sigmoid` from LinearScaler
    linear: bool | None = None,
    sigmoid: bool | tuple[Slope, Center] = False,
    # `border_handling`, `sample_grid_model`, `sar`, `dar`, `dar_in` and `keep_ar` from KeepArScaler
    border_handling: BorderHandling = BorderHandling.MIRROR,
    sample_grid_model: SampleGridModel = SampleGridModel.MATCH_EDGES,
    sar: Sar | float | bool | None = None,
    dar: Dar | float | bool | None = None,
    dar_in: Dar | bool | float | None = None,
    keep_ar: bool | None = None,
    # ComplexScaler adds blur
    blur: float | None = None,
    **kwargs: Any,
) -> vs.VideoNode | ConstantFormatVideoNode:
    """
    Scale a clip to the given resolution, with aspect ratio and linear light support.

    Keyword arguments passed during initialization are automatically injected here,
    unless explicitly overridden by the arguments provided at call time.
    Only arguments that match named parameters in this method are injected.

    :param clip:                The source clip.
    :param width:               Target width (defaults to clip width if None).
    :param height:              Target height (defaults to clip height if None).
    :param shift:               Subpixel shift (top, left) applied during scaling.
                                If a tuple is provided, it is used uniformly.
                                If a list is given, the shift is applied per plane.
    :param linear:              Whether to linearize the input before scaling. If None, inferred from sigmoid.
    :param sigmoid:             Whether to use a sigmoid transfer curve. Can be True, False, or a tuple of (slope, center).
                                `True` applies the default values (6.5, 0.75).
                                Keep in mind the sigmoid slope must be in the range 1.0-20.0 (inclusive)
                                and the sigmoid center in the range 0.0-1.0 (inclusive).
    :param border_handling:     Method for handling image borders during sampling.
    :param sample_grid_model:   Model used to align sampling grid.
    :param sar:                 Sample aspect ratio to assume or convert to.
    :param dar:                 Desired display aspect ratio.
    :param dar_in:              Input display aspect ratio, if different from clip's.
    :param keep_ar:             Whether to adjust dimensions to preserve aspect ratio.
    :param blur:                Amount of blur to apply during scaling.
    :return:                    Scaled clip, optionally aspect-corrected and linearized.
    """
    kwargs.update(
        linear=linear,
        sigmoid=sigmoid,
        border_handling=border_handling,
        sample_grid_model=sample_grid_model,
        sar=sar,
        dar=dar,
        dar_in=dar_in,
        keep_ar=keep_ar,
        blur=blur
    )

    shift_top, shift_left = shift

    if isinstance(shift_top, (int, float)) and isinstance(shift_left, (int, float)):
        return super().scale(clip, width, height, (shift_top, shift_left), **kwargs)

    assert check_variable_format(clip, self.scale)

    n_planes = clip.format.num_planes

    shift_top = normalize_seq(shift_top, n_planes)
    shift_left = normalize_seq(shift_left, n_planes)

    if n_planes == 1:
        if len(set(shift_top)) > 1 or len(set(shift_left)) > 1:
            raise CustomValueError(
                "Inconsistent shift values detected for a single plane. "
                "All shift values must be identical when passing a GRAY clip.",
                self.scale,
                (shift_top, shift_left),
            )

        return super().scale(clip, width, height, (shift_top[0], shift_left[0]), **kwargs)

    width, height = self._wh_norm(clip, width, height)

    format_in = clip.format
    format_out = get_video_format(fallback(kwargs.pop("format", None), self.kwargs.get("format"), clip.format))

    chromaloc = ChromaLocation.from_video(clip, func=self.scale)
    chromaloc_in = ChromaLocation(
        fallback(kwargs.pop("chromaloc_in", None), self.kwargs.get("chromaloc_in"), chromaloc)
    )
    chromaloc_out = ChromaLocation(
        fallback(kwargs.pop("chromaloc", None), self.kwargs.get("chromaloc"), chromaloc)
    )

    off_left, off_top = chromaloc_in.get_offsets(format_in)
    off_left_out, off_top_out = chromaloc_out.get_offsets(format_out)

    factor_w = 1 / 2 ** format_in.subsampling_w
    factor_h = 1 / 2 ** format_in.subsampling_h

    # Offsets for format out
    offc_left = (abs(off_left) + off_left_out) * factor_w
    offc_top = (abs(off_top) + off_top_out) * factor_h

    # Offsets for scale out
    if format_out.subsampling_w:
        offc_left = ((abs(off_left) + off_left * (clip.width / width)) * factor_w) + offc_left
    if format_out.subsampling_h:
        offc_top = ((abs(off_top) + off_top * (clip.height / height)) * factor_h) + offc_top

    for i in range(1, n_planes):
        shift_left[i] += offc_left
        shift_top[i] += offc_top

    scaled_planes = list[vs.VideoNode]()

    for i, (plane, top, left) in enumerate(zip(split(clip), shift_top, shift_left)):
        if i:
            w = round(width * 1 / 2 ** format_out.subsampling_w)
            h = round(height * 1 / 2 ** format_out.subsampling_h)
        else:
            w, h = width, height

        scaled_planes.append(
            super().scale(
                plane,
                w,
                h,
                (top, left),
                format=format_out.replace(color_family=vs.GRAY, subsampling_w=0, subsampling_h=0),
                **kwargs
            )
        )

    merged = vs.core.std.ShufflePlanes(scaled_planes, [0, 0, 0], format_out.color_family, clip)

    if chromaloc_in != chromaloc_out:
        return chromaloc_out.apply(merged)

    return merged

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If the resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If the resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]