onnx

This module implements scalers for ONNX models.

Classes:

  • ArtCNN

    Super-Resolution Convolutional Neural Networks optimised for anime.

  • BaseOnnxScaler

    Abstract generic scaler class for an ONNX model.

  • DPIR

    Deep Plug-and-Play Image Restoration.

  • GenericOnnxScaler

    Generic scaler class for an ONNX model.

  • Waifu2x

    Well-known Image Super-Resolution for Anime-Style Art.

ArtCNN

ArtCNN(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

Super-Resolution Convolutional Neural Networks optimised for anime.

Note that vs-mlrt does not ship these models in the base package.

You will have to grab the extended models pack or download the models from the repository itself, and create an "ArtCNN" folder in your models folder.

https://github.com/Artoriuz/ArtCNN/releases/latest

Defaults to R8F64.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN().scale(clip, clip.width * 2, clip.height * 2)
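
If you need to control the backend or tiling explicitly, both can be set at construction time. A minimal sketch, assuming vsmlrt is installed and its Backend namespace is importable:

from vsmlrt import Backend

from vsscale import ArtCNN

# Run the default R8F64 model on a TensorRT backend with fp16, splitting the
# frame into a 2x2 tile grid to reduce VRAM usage (output may differ slightly
# when tiles are used).
doubled = ArtCNN(backend=Backend.TRT(fp16=True), tiles=(2, 2)).scale(clip, clip.width * 2, clip.height * 2)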

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Classes:

  • C16F64

    Very fast and good enough for AA purposes, but the ONNX variant is officially deprecated.

  • C16F64_Chroma

    The bigger of the two chroma models.

  • C16F64_DS

    The same as C16F64 but intended to also sharpen and denoise.

  • C4F16

    This has 4 internal convolution layers with 16 filters each.

  • C4F16_DS

    The same as C4F16 but intended to also sharpen and denoise.

  • C4F32

    This has 4 internal convolution layers with 32 filters each.

  • C4F32_Chroma

    The smaller of the two chroma models.

  • C4F32_DS

    The same as C4F32 but intended to also sharpen and denoise.

  • R16F96

    The biggest model. Can compete with or outperform Waifu2x Cunet.

  • R8F64

    A smaller and faster version of R16F96 but very competitive.

  • R8F64_Chroma

    The new and fancy big chroma model.

  • R8F64_DS

    The same as R8F64 but intended to also sharpen and denoise.

  • cached_property

    Read-only version of functools.cached_property.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

C16F64

C16F64(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

Very fast and good enough for AA purposes, but the ONNX variant is officially deprecated.

This has 16 internal convolution layers with 64 filters each.

ONNX files available at https://github.com/Artoriuz/ArtCNN/tree/388b91797ff2e675fd03065953cc1147d6f972c2/ONNX

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C16F64().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Classes:

  • cached_property

    Read-only version of functools.cached_property.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)
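
The computed grid depends on the clip's dimensions, the requested tiles or tilesize, and the configured overlap. A quick sketch of calling it, assuming a scaler instance named artcnn and a constant-resolution clip:

# Returns ((tile_w, tile_h), (overlap_w, overlap_h)): the per-tile dimensions
# and the overlap that will be used between neighbouring tiles.
tilesize, overlap = artcnn.calc_tilesize(clip, tiles=(2, 2))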

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
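
In other words, ensure_obj normalizes whatever it is given. A short sketch of the accepted forms, assuming the usual BaseScaler resolution semantics:

from vsscale import ArtCNN

# None default-constructs the class, a type is instantiated, and an existing
# instance is passed through unchanged; all three yield a usable scaler.
a = ArtCNN.C16F64.ensure_obj()
b = ArtCNN.C16F64.ensure_obj(ArtCNN.C16F64)
c = ArtCNN.C16F64.ensure_obj(ArtCNN.C16F64())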

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs
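
Because the dictionaries are merged with |, later entries take precedence: per-call kwargs override the instance's stored self.kwargs, which in turn override the generated defaults. A small illustration with hypothetical values:

# generated defaults  <-  self.kwargs  <-  per-call kwargs (highest precedence)
merged = dict(width=1280, height=720, src_top=0) | {"src_top": 0.5} | {"src_top": 0.25}
assert merged["src_top"] == 0.25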

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_`, `postprocess_`, or `inference_` to route
                        an argument to the corresponding method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
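
When you know a variable resolution clip is coming, the TRT backend can be configured with dynamic shapes up front instead of relying on the warning-driven fallbacks above. A hedged sketch, assuming vsmlrt's Backend.TRT exposes the static_shape, max_shapes, and opt_shapes fields referenced in the source:

from vsmlrt import Backend

from vsscale import ArtCNN

# Dynamic shapes sized for inputs up to roughly 1080p; adjust max_shapes to
# the largest resolution that actually occurs in the clip.
backend = Backend.TRT(static_shape=False, max_shapes=(1936, 1088), opt_shapes=(64, 64))
scaler = ArtCNN(backend=backend)  # scaler.scale(...) now skips the fallback warnings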

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
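
In short, supersample is a thin convenience over scale with a relative factor:

from vsscale import ArtCNN

# Equivalent to scaling to (ceil(clip.width * 1.5), ceil(clip.height * 1.5)).
upscaled = ArtCNN.C16F64().supersample(clip, 1.5)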

C16F64_Chroma

C16F64_Chroma(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNChroma

The bigger of the two chroma models.

These models don't double the input clip; instead, they try to enhance the chroma using information from the luma plane.

Example usage:

from vsscale import ArtCNN

chroma_upscaled = ArtCNN.C16F64_Chroma().scale(clip)

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Classes:

  • cached_property

    Read-only version of functools.cached_property.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if clip.format.subsampling_h != 0 or clip.format.subsampling_w != 0:
        chroma_scaler = Kernel.ensure_obj(kwargs.pop("chroma_scaler", Bilinear))

        clip = chroma_scaler.resample(
            clip, clip.format.replace(
                subsampling_h=0, subsampling_w=0,
                sample_type=vs.FLOAT, bits_per_sample=16 if self.backend.fp16 else 32
            )
        )
        return limiter(clip, func=self.__class__)

    return super().preprocess_clip(clip, **kwargs)
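
The Bilinear kernel used by default for this 4:4:4 resample can be swapped out through scale's prefix routing. A minimal sketch, assuming vskernels exposes the Lanczos kernel:

from vskernels import Lanczos

from vsscale import ArtCNN

# "preprocess_chroma_scaler" is stripped of its prefix by scale() and handed
# to preprocess_clip, replacing Bilinear for the chroma upsampling step.
enhanced = ArtCNN.C16F64_Chroma().scale(clip, preprocess_chroma_scaler=Lanczos)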

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_`, `postprocess_`, or `inference_` to route
                        an argument to the corresponding method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

C16F64_DS

C16F64_DS(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

The same as C16F64 but intended to also sharpen and denoise.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C16F64_DS().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Classes:

  • cached_property

    Read-only version of functools.cached_property.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs
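
Because the dictionaries are merged left to right, the instance's stored kwargs override the positional values, and per-call kwargs override both. A minimal sketch (clip is assumed to be an existing VideoNode):

from vsscale import ArtCNN

args = ArtCNN().get_scale_args(clip, (0.5, 0.25), 1920, 1080)

# args == {'width': 1920, 'height': 1080, 'src_top': 0.5, 'src_left': 0.25},
# plus anything stored in self.kwargs or passed as extra keyword arguments.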

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)
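
Note that inference expects a clip already prepared by preprocess_clip, and that this implementation does not forward **kwargs to vsmlrt. A sketch of calling it directly (it is normally invoked through scale):

from vsscale import ArtCNN

scaler = ArtCNN()
wclip = scaler.preprocess_clip(clip)  # luma-only, model-ready format
scaled = scaler.inference(wclip)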

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()
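
Example usage (a one-line sketch; as a cached property it is accessed without parentheses):

from vsscale import ArtCNN

print(ArtCNN(tiles=2).pretty_string)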

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values (top, left).

  • **kwargs
    (Any, default: {} ) –

    Additional arguments routed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Prefix an argument with preprocess_, postprocess_, or inference_ to pass it to the corresponding method; any unprefixed arguments are forwarded to _finish_scale.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values (top, left).
    :param **kwargs:    Additional arguments routed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Prefix an argument with `preprocess_`, `postprocess_`, or `inference_` to pass it
                        to the corresponding method; any unprefixed arguments are forwarded to `_finish_scale`.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
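
The prefix routing above can be illustrated in isolation. This sketch mirrors the loop from the source; the argument names are purely hypothetical:

kwargs = {"preprocess_dither_type": "none", "inference_tiles": 2, "linear": True}

routed: dict[str, dict[str, object]] = {"preprocess_": {}, "postprocess_": {}, "inference_": {}}

# Prefixed keys are popped and stripped of their prefix; the rest stay in kwargs.
for k in list(kwargs):
    for prefix, ckwargs in routed.items():
        if k.startswith(prefix):
            ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
            break

assert routed["preprocess_"] == {"dither_type": "none"}
assert routed["inference_"] == {"tiles": 2}
assert kwargs == {"linear": True}  # left over, forwarded to _finish_scale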

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
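
Since the destination size is rounded up with ceil, a 1920x1080 clip at rfactor=1.5 becomes 2880x1620. Example usage:

from vsscale import ArtCNN

ss = ArtCNN().supersample(clip, 1.5)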

C4F16

C4F16(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

This has 4 internal convolution layers with 16 filters each.

Currently the fastest variant, though not really recommended for any filtering.

It should strictly be used for real-time applications, and even then the other non-R models should be fast enough...

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C4F16().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values (top, left).

  • **kwargs
    (Any, default: {} ) –

    Additional arguments routed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Prefix an argument with preprocess_, postprocess_, or inference_ to pass it to the corresponding method; any unprefixed arguments are forwarded to _finish_scale.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values (top, left).
    :param **kwargs:    Additional arguments routed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Prefix an argument with `preprocess_`, `postprocess_`, or `inference_` to pass it
                        to the corresponding method; any unprefixed arguments are forwarded to `_finish_scale`.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

C4F16_DS

C4F16_DS(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

The same as C4F16 but intended to also sharpen and denoise.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C4F16_DS().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values (top, left).

  • **kwargs
    (Any, default: {} ) –

    Additional arguments routed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Prefix an argument with preprocess_, postprocess_, or inference_ to pass it to the corresponding method; any unprefixed arguments are forwarded to _finish_scale.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values (top, left).
    :param **kwargs:    Additional arguments routed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Prefix an argument with `preprocess_`, `postprocess_`, or `inference_` to pass it
                        to the corresponding method; any unprefixed arguments are forwarded to `_finish_scale`.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

C4F32

C4F32(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

This has 4 internal convolution layers with 32 filters each.

Use this if you need an even faster model.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C4F32().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
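
A sketch of the resolution behaviour inherited from the base scaler machinery; all three calls below are assumed to yield an equivalent instance:

from vsscale import ArtCNN

s1 = ArtCNN.ensure_obj(ArtCNN())  # instances pass through unchanged
s2 = ArtCNN.ensure_obj(ArtCNN)    # classes are instantiated with defaults
s3 = ArtCNN.ensure_obj(None)      # None falls back to the calling class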

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs
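
Because of the merge order above, per-call keyword arguments override instance-level kwargs, which in turn override the positional defaults; a small sketch:

from vsscale import ArtCNN

args = ArtCNN().get_scale_args(clip, shift=(0.5, 0.25), width=1920, height=1080)
# Starts from {'width': 1920, 'height': 1080, 'src_top': 0.5, 'src_left': 0.25},
# then self.kwargs and any extra keyword arguments are layered on top.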

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
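
For variable resolution sources specifically, a sketch of pre-configuring the TRT backend so that none of the fallback warnings above fire; the shape values are placeholders to tune for your source:

from vsmlrt import Backend
from vsscale import ArtCNN

backend = Backend.TRT(
    static_shape=False,       # required for variable resolution input
    max_shapes=(1920, 1088),  # largest frame size the engine must accept
    opt_shapes=(1280, 720),   # frame size the engine is optimised for
)
doubled = ArtCNN(backend=backend).scale(clip)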

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
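
A minimal sketch of fractional supersampling; since the target size is rounded up with ceil, non-integer factors still produce valid dimensions:

from vsscale import ArtCNN

# 1920x1080 at rfactor=1.5 -> 2880x1620.
supersampled = ArtCNN().supersample(clip, rfactor=1.5)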

C4F32_Chroma

C4F32_Chroma(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNChroma

The smaller of the two chroma models.

These models don't double the input clip; instead, they try to enhance the chroma using luma information.

Example usage:

from vsscale import ArtCNN

chroma_upscaled = ArtCNN.C4F32_Chroma().scale(clip)

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if clip.format.subsampling_h != 0 or clip.format.subsampling_w != 0:
        chroma_scaler = Kernel.ensure_obj(kwargs.pop("chroma_scaler", Bilinear))

        clip = chroma_scaler.resample(
            clip, clip.format.replace(
                subsampling_h=0, subsampling_w=0,
                sample_type=vs.FLOAT, bits_per_sample=16 if self.backend.fp16 else 32
            )
        )
        return limiter(clip, func=self.__class__)

    return super().preprocess_clip(clip, **kwargs)
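
Because scale() routes preprocess_-prefixed kwargs to this method, the kernel used for the 4:4:4 resample can be swapped per call; a sketch, assuming Bicubic is importable from vskernels alongside the kernels already used here:

from vskernels import Bicubic
from vsscale import ArtCNN

# Resample subsampled input to 4:4:4 with Bicubic instead of the default Bilinear.
merged = ArtCNN.C4F32_Chroma().scale(clip, preprocess_chroma_scaler=Bicubic)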

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

C4F32_DS

C4F32_DS(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

The same as C4F32 but intended to also sharpen and denoise.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.C4F32_DS().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

R16F96

R16F96(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

The biggest model. Can compete with or outperform Waifu2x Cunet.

It is also quite a bit slower, but less heavy on VRAM.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.R16F96().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)
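
As a usage note, the effective tiling can be inspected directly; per the return annotation above, the first tuple is the tile size and the second the overlap (a minimal sketch):

scaler = ArtCNN.R16F96(tiles=2)
(tile_w, tile_h), (overlap_w, overlap_h) = scaler.calc_tilesize(clip)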

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs
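
Note the merge order in the return statement: dict union is right-biased, so instance-level kwargs override the computed width/height/shift entries, and call-time kwargs override both. A toy illustration of that precedence in plain Python:

base = dict(width=1920, src_top=0.0)
merged = base | {"width": 3840} | {"src_top": 0.5}
assert merged == {"width": 3840, "src_top": 0.5}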

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to pass an argument to the corresponding method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
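
The prefix routing at the top of scale is plain dictionary bookkeeping; the standalone sketch below reproduces it so the behaviour is easy to verify in isolation (the argument names in the demo call are hypothetical, chosen only for illustration):

def split_prefixed_kwargs(kwargs: dict) -> tuple[dict, dict, dict, dict]:
    """Route 'preprocess_'/'postprocess_'/'inference_' kwargs into separate buckets."""
    pre, post, inf = {}, {}, {}
    for k in list(kwargs):
        for prefix, bucket in (("preprocess_", pre), ("postprocess_", post), ("inference_", inf)):
            if k.startswith(prefix):
                bucket[k.removeprefix(prefix)] = kwargs.pop(k)
                break
    return pre, post, inf, kwargs  # leftovers are forwarded to _finish_scale

pre, post, inf, rest = split_prefixed_kwargs({"preprocess_chroma_scaler": "Bicubic", "linear": True})
assert pre == {"chroma_scaler": "Bicubic"} and rest == {"linear": True}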

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
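
For example, with a non-integer factor the destination size is rounded up with ceil (a sketch, assuming a 1920x1080 input):

ss = ArtCNN.R16F96().supersample(clip, 1.5)  # 1920x1080 -> 2880x1620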

R8F64

R8F64(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

A smaller and faster version of R16F96 but very competitive.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.R8F64().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to pass an argument to the corresponding method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

R8F64_Chroma

R8F64_Chroma(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNChroma

The new and fancy big chroma model.

These models don't double the input clip; rather, they try to enhance the chroma using luma information.

Example usage:

from vsscale import ArtCNN

chroma_upscaled = ArtCNN.R8F64_Chroma().scale(clip)

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if clip.format.subsampling_h != 0 or clip.format.subsampling_w != 0:
        chroma_scaler = Kernel.ensure_obj(kwargs.pop("chroma_scaler", Bilinear))

        clip = chroma_scaler.resample(
            clip, clip.format.replace(
                subsampling_h=0, subsampling_w=0,
                sample_type=vs.FLOAT, bits_per_sample=16 if self.backend.fp16 else 32
            )
        )
        return limiter(clip, func=self.__class__)

    return super().preprocess_clip(clip, **kwargs)
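
Because scale routes any "preprocess_"-prefixed argument to this method, the kernel used to resample subsampled chroma can be overridden from the call site (a sketch; Bicubic from vskernels stands in for any Kernel-like object):

from vsscale import ArtCNN
from vskernels import Bicubic

merged = ArtCNN.R8F64_Chroma().scale(clip, preprocess_chroma_scaler=Bicubic)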

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to pass an argument to the corresponding method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

R8F64_DS

R8F64_DS(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNNLuma

The same as R8F64 but intended to also sharpen and denoise.

Example usage:

from vsscale import ArtCNN

doubled = ArtCNN.R8F64_DS().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)
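
As a sketch, the resolved tile geometry can be inspected directly; tiles=2 is an arbitrary illustrative value, and the return order (tile size, then overlap) is assumed from the signature above:

scaler = ArtCNN.R8F64_DS(tiles=2)

tilesize, overlap = scaler.calc_tilesize(clip)  # assumed: ((tile_w, tile_h), (overlap_w, overlap_h))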

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
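
An illustrative sketch of the resolution behaviour (assumed semantics: None falls back to the class default, an existing instance passes through unchanged):

scaler = ArtCNN.R8F64_DS.ensure_obj(None)  # assumed: a default-constructed R8F64_DS instance
same = ArtCNN.R8F64_DS.ensure_obj(scaler)  # assumed: the existing instance is returned as-is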

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs
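
Note the merge order: self.kwargs overrides the generated defaults, and explicit keyword arguments override both. A quick sketch:

args = scaler.get_scale_args(clip, (0.5, 0), 1920, 1080)

# args == {"width": 1920, "height": 1080, "src_top": 0.5, "src_left": 0},
# before self.kwargs and any explicit **kwargs are merged on top.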

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)
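
Migrating is a one-line change:

doubled = scaler.multi(clip, 2.0)        # deprecated, emits a DeprecationWarning
doubled = scaler.supersample(clip, 2.0)  # equivalent replacement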

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method. A short sketch follows below.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.
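
A sketch of the prefix routing described above; some_option is a hypothetical keyword, not a real parameter:

scaled = ArtCNN.R8F64_DS().scale(
    clip, clip.width * 2, clip.height * 2,
    inference_some_option=True,  # hypothetical: the prefix is stripped and the value is passed to inference()
)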

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
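
For variable resolution clips the TRT backend can be configured up front rather than relying on the defaults patched in above (a sketch; the shape values are illustrative):

from vsmlrt import Backend

backend = Backend.TRT(
    static_shape=False,       # required for variable resolution processing
    opt_shapes=(1280, 720),   # dimensions the engine is optimised for
    max_shapes=(1920, 1080),  # upper bound on input dimensions
)
scaled = ArtCNN.R8F64_DS(backend=backend).scale(clip)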

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

BaseArtCNN

BaseArtCNN(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseOnnxScaler

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip

    Performs preprocessing on the clip prior to inference.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode

Performs preprocessing on the clip prior to inference.

Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Performs preprocessing on the clip prior to inference."""

    clip = depth(clip, 16 if self.backend.fp16 else 32, vs.FLOAT)
    return limiter(clip, func=self.__class__)
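
Precision therefore follows the backend: fp16-capable backends run the model on 16-bit float input, everything else on 32-bit float. Forcing full precision is a backend configuration concern (a sketch; fp16 is a standard vs-mlrt backend field):

from vsmlrt import Backend

scaler = ArtCNN(backend=Backend.TRT(fp16=False))  # assumed: forces fp32 inference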

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

BaseArtCNNChroma

BaseArtCNNChroma(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNN

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(SPath(model).resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)
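
For a quick sanity check, the helper can be called directly; a minimal sketch, assuming clip is an existing vs.VideoNode and that the first returned tuple is the tile size and the second the overlap (an assumption based on vsmlrt.calc_tilesize):

scaler = ArtCNN(tiles=2, overlap=16)
(tile_w, tile_h), (overlap_w, overlap_h) = scaler.calc_tilesize(clip)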

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to be used for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to be used for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to be used for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to be used for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
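
A minimal sketch of how the two resolvers differ, assuming ArtCNN is in scope (the pass-through behavior on the last line is an assumption based on "resolving it if necessary"):

scaler_t = ArtCNN.from_param(ArtCNN)  # resolved type, not instantiated
scaler = ArtCNN.ensure_obj(ArtCNN)    # default-constructed instance
scaler = ArtCNN.ensure_obj(scaler)    # an existing instance is returned as-is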

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs
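
Note the merge order in the return statement: per-call kwargs override self.kwargs, which in turn override the positional geometry. A sketch of the result (values illustrative):

args = scaler.get_scale_args(clip, (0.5, 0.0), 1280, 720)
# -> {'width': 1280, 'height': 720, 'src_top': 0.5, 'src_left': 0.0}, plus any entries from self.kwargs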

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if clip.format.subsampling_h != 0 or clip.format.subsampling_w != 0:
        chroma_scaler = Kernel.ensure_obj(kwargs.pop("chroma_scaler", Bilinear))

        clip = chroma_scaler.resample(
            clip, clip.format.replace(
                subsampling_h=0, subsampling_w=0,
                sample_type=vs.FLOAT, bits_per_sample=16 if self.backend.fp16 else 32
            )
        )
        return limiter(clip, func=self.__class__)

    return super().preprocess_clip(clip, **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.
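
For example, a minimal sketch of the prefix routing; chroma_scaler is a parameter accepted by preprocess_clip (see its source below), and Bilinear is assumed to be imported from vskernels:

doubled = ArtCNN().scale(
    clip, clip.width * 2, clip.height * 2,
    preprocess_chroma_scaler=Bilinear,  # prefix stripped, forwarded to preprocess_clip()
)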

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
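
The variable resolution branch above mutates the backend in place; a sketch of configuring it up front instead (Backend.TRT and the field names are taken from the source above; the shapes are illustrative):

from vsmlrt import Backend

scaler = ArtCNN(backend=Backend.TRT(static_shape=False, max_shapes=(1936, 1088), opt_shapes=(64, 64)))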

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
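
In effect, supersample is a thin wrapper over scale with both dimensions rounded up; for a 1920x1080 clip:

up = ArtCNN().supersample(clip, 1.5)
# equivalent to ArtCNN().scale(clip, 2880, 1620), i.e. ceil(1920 * 1.5) x ceil(1080 * 1.5)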

BaseArtCNNLuma

BaseArtCNNLuma(
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseArtCNN

Parameters:

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(SPath(model).resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to be used for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to be used for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to be used for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to be used for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import ArtCNN as mlrt_ArtCNN
    from vsmlrt import ArtCNNModel

    return mlrt_ArtCNN(clip, self.tiles, self.tilesize, self.overlap, ArtCNNModel(self._model), self.backend)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    return super().preprocess_clip(get_y(clip), **kwargs)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

BaseDPIR

BaseDPIR(
    strength: SupportsFloat | VideoNode = 10,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseOnnxScaler

Initializes the class with the specified parameters.

Parameters:

  • strength

    (SupportsFloat | VideoNode, default: 10 ) –

    Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.


Source code
def __init__(
    self,
    strength: SupportsFloat | vs.VideoNode = 10,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    Initializes the class with the specified parameters.

    :param strength:        Threshold (8-bit scale) strength for deblocking/denoising.
                            If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values
                            representing the 8-bit thresholds.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.strength = strength
    self.multiple = 8

    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        16 if overlap is None else overlap,
        -1,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs
    )
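
Because strength may be a clip, the threshold can vary per pixel; a minimal sketch with a constant GRAY8 strength clip (DPIR is used here as a stand-in for whichever concrete DPIR model class you instantiate, and clip is assumed to exist):

import vapoursynth as vs

core = vs.core

strength = core.std.BlankClip(clip, format=vs.GRAY8, color=20)  # 8-bit threshold per pixel
deblocked = DPIR(strength=strength).scale(clip)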

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(SPath(model).resolve())

multiple instance-attribute

multiple = 8

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

strength instance-attribute

strength = strength

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]
Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    return super().calc_tilesize(clip, **dict(multiple=self.multiple) | kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to be used for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to be used for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function to be used for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to be used for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import DPIR as mlrt_dpir
    from vsmlrt import DPIRModel

    args = (
        self.tiles,
        self.tilesize,
        self.overlap,
        DPIRModel(self._model[0] if clip.format.color_family == vs.GRAY else self._model[1]),
        self.backend
    )
    padding = padder.mod_padding(clip, self.multiple, 0)

    if not any(padding) or kwargs.pop("no_pad", False):
        return mlrt_dpir(clip, self.strength, *args)

    clip = padder.MIRROR(clip, *padding)
    strength = padder.MIRROR(self.strength, *padding) if isinstance(self.strength, vs.VideoNode) else self.strength

    inferenced = mlrt_dpir(clip, strength, *args)

    return inferenced.std.Crop(*padding)
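
The padding round-trip above can be read step by step (names taken from the code; the per-edge semantics are an assumption):

padding = padder.mod_padding(clip, 8, 0)  # padding needed to reach mod-8 dimensions
padded = padder.MIRROR(clip, *padding)    # mirror-pad so the model sees valid mod-8 input
restored = padded.std.Crop(*padding)      # crop back to the original dimensions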

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    if get_color_family(clip) == vs.GRAY:
        return super().preprocess_clip(clip, **kwargs)

    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)

    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    assert check_variable_resolution(clip, self.__class__)

    return super().scale(clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

BaseOnnxScaler

BaseOnnxScaler(
    model: SPathLike | None = None,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseGenericScaler, ABC

Abstract generic scaler class for an ONNX model.

Parameters:

  • model

    (SPathLike | None, default: None ) –

    Path to the ONNX model file.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip

    Performs preprocessing on the clip prior to inference.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    model: SPathLike | None = None,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param model:           Path to the ONNX model file.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(kernel=kernel, scaler=scaler, shifter=shifter, **kwargs)

    if model is not None:
        self.model = str(SPath(model).resolve())

    if backend is None:
        _fp16 = self.kwargs.pop("fp16", True)
        _default_args = KwargsT(fp16=_fp16, output_format=int(_fp16), use_cuda_graph=True, use_cublas=True, heuristic=True)
        self.backend = autoselect_backend(**_default_args | self.kwargs)
    else:
        self.backend = backend

    self.tiles = tiles
    self.tilesize = tilesize
    self.overlap = overlap

    if self.overlap is None:
        self.overlap_w = self.overlap_h = 8
    elif isinstance(self.overlap, int):
        self.overlap_w = self.overlap_h = self.overlap
    else:
        self.overlap_w, self.overlap_h = self.overlap

    self.max_instances = max_instances
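
A minimal construction sketch (assuming the concrete GenericOnnxScaler subclass and a placeholder model path; with backend=None the most suitable backend is selected automatically):

from vsscale import GenericOnnxScaler

# "path/to/model.onnx" is illustrative only; overlap falls back to 8 when omitted.
scaler = GenericOnnxScaler("path/to/model.onnx", overlap=16)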

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented functions or the internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(SPath(model).resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)
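
As a usage sketch (with scaler being any constructed instance), the returned pair is exactly what inference unpacks and forwards to vsmlrt:

# ((tile_w, tile_h), (overlap_w, overlap_h)) for the given clip.
tiles, overlaps = scaler.calc_tilesize(clip)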

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
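
A short sketch of the accepted inputs, using ArtCNN purely as an example concrete scaler:

from vsscale import ArtCNN

inst = ArtCNN.ensure_obj(None)   # None resolves to a default instance
same = ArtCNN.ensure_obj(inst)   # existing instances pass through unchanged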

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Runs inference on the given video clip using the configured model and backend.

Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Runs inference on the given video clip using the configured model and backend."""

    from vsmlrt import inference

    tiles, overlaps = self.calc_tilesize(clip)

    return inference(clip, self.model, overlaps, tiles, self.backend, **kwargs)
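
Since inference expects a clip already matching the backend's input format, direct calls are normally bracketed by the pre/post hooks, mirroring what scale does internally:

wclip = scaler.preprocess_clip(clip)
out = scaler.postprocess_clip(scaler.inference(wclip), clip)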

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode

Performs preprocessing on the clip prior to inference.

Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Performs preprocessing on the clip prior to inference."""

    clip = depth(clip, 16 if self.backend.fp16 else 32, vs.FLOAT)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
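
A usage sketch, including the prefix routing described above (inference_foo is hypothetical and only illustrates the mechanism):

doubled = scaler.scale(clip, clip.width * 2, clip.height * 2)

# A keyword such as inference_foo=1 is stripped of its prefix and forwarded
# as foo=1 to inference(); unprefixed extras reach _finish_scale instead.
# scaler.scale(clip, inference_foo=1)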

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
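
For example, a 1.5x supersample (target dimensions are rounded up with ceil):

ss = scaler.supersample(clip, 1.5)  # 1920x1080 -> 2880x1620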

BaseWaifu2x

BaseWaifu2x(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseOnnxScaler

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Classes:

  • cached_property

    Read only version of functools.cached_property.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip

    Performs preprocessing on the clip prior to inference.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )
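
A hedged usage sketch via the public Waifu2x class exported by this module, relying on its default model:

from vsscale import Waifu2x

# 2x upscale with mild noise reduction; noise=-1 disables it entirely.
doubled = Waifu2x(scale=2, noise=0).scale(clip, clip.width * 2, clip.height * 2)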

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented functions or the internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(SPath(model).resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode

Performs preprocessing on the clip prior to inference.

Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Performs preprocessing on the clip prior to inference."""

    clip = depth(clip, 16 if self.backend.fp16 else 32, vs.FLOAT)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

BaseWaifu2xRGB

BaseWaifu2xRGB(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2x

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Classes:

  • cached_property

    Read only version of functools.cached_property.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip
  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented functions or the internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(SPath(model).resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if clip.format != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
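
Example usage (a minimal sketch; `rfactor=2` doubles both dimensions, equivalent to calling `scale` with `clip.width * 2` and `clip.height * 2`):

from vsscale import GenericOnnxScaler

doubled = GenericOnnxScaler("path/to/model.onnx").supersample(clip, 2)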

DPIR

DPIR(
    strength: SupportsFloat | VideoNode = 10,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseDPIR

Deep Plug-and-Play Image Restoration

Initializes the class with the specified parameters.

Parameters:

  • strength

    (SupportsFloat | VideoNode, default: 10 ) –

    Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.


Source code
def __init__(
    self,
    strength: SupportsFloat | vs.VideoNode = 10,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    Initializes the class with the specified parameters.

    :param strength:        Threshold (8-bit scale) strength for deblocking/denoising.
                            If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values
                            representing the 8-bit thresholds.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.strength = strength
    self.multiple = 8

    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        16 if overlap is None else overlap,
        -1,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs
    )
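
Example usage (a minimal sketch; DPIR restores rather than upscales, so no target size is passed and the output keeps the source resolution):

from vsscale import DPIR

restored = DPIR(strength=10).scale(clip)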

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = 8

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

strength instance-attribute

strength = strength

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

DrunetDeblock

DrunetDeblock(
    strength: SupportsFloat | VideoNode = 10,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseDPIR

DPIR model for deblocking.

Initializes the class with the specified parameters.

Parameters:

  • strength

    (SupportsFloat | VideoNode, default: 10 ) –

    Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.


Source code
def __init__(
    self,
    strength: SupportsFloat | vs.VideoNode = 10,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    Initializes the class with the specified parameters.

    :param strength:        Threshold (8-bit scale) strength for deblocking/denoising.
                            If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values
                            representing the 8-bit thresholds.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.strength = strength
    self.multiple = 8

    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        16 if overlap is None else overlap,
        -1,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs
    )
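
Example usage (a minimal sketch, analogous to DPIR above; a higher strength deblocks more aggressively):

from vsscale import DrunetDeblock

deblocked = DrunetDeblock(strength=15).scale(clip)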

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = 8

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

strength instance-attribute

strength = strength

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]
Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    return super().calc_tilesize(clip, **dict(multiple=self.multiple) | kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
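
A sketch of how the two resolvers relate, assuming the usual vstools BaseScaler semantics (None falls back to the class itself, and an existing instance passes through ensure_obj unchanged):

from vsscale import DrunetDeblock

scaler_t = DrunetDeblock.from_param(None)      # resolves to the DrunetDeblock type
scaler = DrunetDeblock.ensure_obj(scaler_t())  # instances are returned as-is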

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs
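
The dict unions above give later mappings precedence: instance-level `self.kwargs` override the positional width/height/shift values, and call-site `kwargs` override both. A small sketch of that ordering:

base = dict(width=1280, height=720, src_top=0, src_left=0)
merged = base | {"width": 1920} | {"width": 3840, "height": 2160}
assert merged == {"width": 3840, "height": 2160, "src_top": 0, "src_left": 0}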

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import DPIR as mlrt_dpir
    from vsmlrt import DPIRModel

    args = (
        self.tiles,
        self.tilesize,
        self.overlap,
        DPIRModel(self._model[0] if clip.format.color_family == vs.GRAY else self._model[1]),
        self.backend
    )
    padding = padder.mod_padding(clip, self.multiple, 0)

    if not any(padding) or kwargs.pop("no_pad", False):
        return mlrt_dpir(clip, self.strength, *args)

    clip = padder.MIRROR(clip, *padding)
    strength = padder.MIRROR(self.strength, *padding) if isinstance(self.strength, vs.VideoNode) else self.strength

    inferenced = mlrt_dpir(clip, strength, *args)

    return inferenced.std.Crop(*padding)
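
Because `strength` may itself be a clip, the mirror padding is applied to it as well so it stays aligned with the padded frame. A hedged sketch of supplying a clip-based strength (a constant GRAYS clip here, standing in for a real per-pixel mask):

import vapoursynth as vs
from vsscale import DrunetDeblock

core = vs.core

# GRAYS clip whose pixel values are 8-bit-scale thresholds.
strength_clip = core.std.BlankClip(clip, format=vs.GRAYS, color=10.0, keep=True)
deblocked = DrunetDeblock(strength=strength_clip).scale(clip)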

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    if get_color_family(clip) == vs.GRAY:
        return super().preprocess_clip(clip, **kwargs)

    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)

    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    assert check_variable_resolution(clip, self.__class__)

    return super().scale(clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

DrunetDenoise

DrunetDenoise(
    strength: SupportsFloat | VideoNode = 10,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseDPIR

DPIR model for denoising.

Initializes the class with the specified parameters.

Parameters:

  • strength

    (SupportsFloat | VideoNode, default: 10 ) –

    Threshold (8-bit scale) strength for deblocking/denoising. If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values representing the 8-bit thresholds.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.


Source code
def __init__(
    self,
    strength: SupportsFloat | vs.VideoNode = 10,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    Initializes the class with the specified parameters.

    :param strength:        Threshold (8-bit scale) strength for deblocking/denoising.
                            If a VideoNode is used, it must be in GRAY8, GRAYH, or GRAYS format, with pixel values
                            representing the 8-bit thresholds.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.strength = strength
    self.multiple = 8

    super().__init__(
        None,
        backend,
        tiles,
        tilesize,
        16 if overlap is None else overlap,
        -1,
        kernel=kernel,
        scaler=scaler,
        shifter=shifter,
        **kwargs
    )
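
Example usage (a minimal sketch; denoising typically calls for a lower strength than deblocking):

from vsscale import DrunetDenoise

denoised = DrunetDenoise(strength=3).scale(clip)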

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

multiple instance-attribute

multiple = 8

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

strength instance-attribute

strength = strength

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]
Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    return super().calc_tilesize(clip, **dict(multiple=self.multiple) | kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import DPIR as mlrt_dpir
    from vsmlrt import DPIRModel

    args = (
        self.tiles,
        self.tilesize,
        self.overlap,
        DPIRModel(self._model[0] if clip.format.color_family == vs.GRAY else self._model[1]),
        self.backend
    )
    padding = padder.mod_padding(clip, self.multiple, 0)

    if not any(padding) or kwargs.pop("no_pad", False):
        return mlrt_dpir(clip, self.strength, *args)

    clip = padder.MIRROR(clip, *padding)
    strength = padder.MIRROR(self.strength, *padding) if isinstance(self.strength, vs.VideoNode) else self.strength

    inferenced = mlrt_dpir(clip, strength, *args)

    return inferenced.std.Crop(*padding)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    if get_color_family(clip) == vs.GRAY:
        return super().preprocess_clip(clip, **kwargs)

    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)

    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    assert check_variable_resolution(clip, self.__class__)

    return super().scale(clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]
Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    return super().calc_tilesize(clip, **dict(multiple=self.multiple) | kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import DPIR as mlrt_dpir
    from vsmlrt import DPIRModel

    args = (
        self.tiles,
        self.tilesize,
        self.overlap,
        DPIRModel(self._model[0] if clip.format.color_family == vs.GRAY else self._model[1]),
        self.backend
    )
    padding = padder.mod_padding(clip, self.multiple, 0)

    if not any(padding) or kwargs.pop("no_pad", False):
        return mlrt_dpir(clip, self.strength, *args)

    clip = padder.MIRROR(clip, *padding)
    strength = padder.MIRROR(self.strength, *padding) if isinstance(self.strength, vs.VideoNode) else self.strength

    inferenced = mlrt_dpir(clip, strength, *args)

    return inferenced.std.Crop(*padding)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if get_video_format(clip) != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    if get_color_family(clip) == vs.GRAY:
        return super().preprocess_clip(clip, **kwargs)

    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)

    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    assert check_variable_resolution(clip, self.__class__)

    return super().scale(clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

GenericOnnxScaler

GenericOnnxScaler(
    model: SPathLike | None = None,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseOnnxScaler

Generic scaler class for an ONNX model.

Example usage:

from vsscale import GenericOnnxScaler

scaled = GenericOnnxScaler("path/to/model.onnx").scale(clip, ...)

# For Windows paths:
scaled = GenericOnnxScaler(r"path\to\model.onnx").scale(clip, ...)

Parameters:

  • model

    (SPathLike | None, default: None ) –

    Path to the ONNX model file.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference

    Runs inference on the given video clip using the configured model and backend.

  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip

    Performs preprocessing on the clip prior to inference.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    model: SPathLike | None = None,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param model:           Path to the ONNX model file.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    super().__init__(kernel=kernel, scaler=scaler, shifter=shifter, **kwargs)

    if model is not None:
        self.model = str(SPath(model).resolve())

    if backend is None:
        _fp16 = self.kwargs.pop("fp16", True)
        _default_args = KwargsT(fp16=_fp16, output_format=int(_fp16), use_cuda_graph=True, use_cublas=True, heuristic=True)
        self.backend = autoselect_backend(**_default_args | self.kwargs)
    else:
        self.backend = backend

    self.tiles = tiles
    self.tilesize = tilesize
    self.overlap = overlap

    if self.overlap is None:
        self.overlap_w = self.overlap_h = 8
    elif isinstance(self.overlap, int):
        self.overlap_w = self.overlap_h = self.overlap
    else:
        self.overlap_w, self.overlap_h = self.overlap

    self.max_instances = max_instances
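
The autoselection path only runs when `backend` is None; otherwise the given backend is used verbatim and the fp16-related defaults above are skipped. A sketch of both paths (assuming vsmlrt's `Backend` classes are available):

from vsmlrt import Backend
from vsscale import GenericOnnxScaler

# Explicit backend: used as given.
scaler = GenericOnnxScaler("path/to/model.onnx", backend=Backend.TRT(fp16=True))

# Autoselected backend: fp16 (default True) is popped from kwargs and forwarded.
scaler = GenericOnnxScaler("path/to/model.onnx", fp16=False)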

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function.

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)
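
A hedged sketch of inspecting the computed tiling, assuming vsmlrt's `calc_tilesize` returns the per-tile size and the overlap as two (width, height) tuples:

from vsscale import GenericOnnxScaler

scaler = GenericOnnxScaler("path/to/model.onnx", tiles=2, overlap=16)
(tile_w, tile_h), (overlap_w, overlap_h) = scaler.calc_tilesize(clip)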

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)
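
In practice this lets an API accept a scaler in several forms (a minimal sketch of the typical resolution behavior):

from vsscale import ArtCNN

ArtCNN.ensure_obj(ArtCNN)    # a class resolves to an instance of it
ArtCNN.ensure_obj(ArtCNN())  # an existing instance stays an instance
ArtCNN.ensure_obj()          # None falls back to the class default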

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
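
Unlike ensure_obj, this resolves to the type rather than an instance, so construction can be deferred (sketch):

from vsscale import ArtCNN

scaler_t = ArtCNN.from_param(ArtCNN)
scaler = scaler_t()  # instantiate later, with whatever arguments are needed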

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs
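
Because the merge runs left to right, instance kwargs override the defaults and call-site kwargs override both (sketch; the values are illustrative):

args = scaler.get_scale_args(clip, (0.5, 0), 1920, 1080, src_top=0.25)

# The explicit kwarg wins over the shift tuple: args["src_top"] == 0.25.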

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Runs inference on the given video clip using the configured model and backend.

Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Runs inference on the given video clip using the configured model and backend."""

    from vsmlrt import inference

    tiles, overlaps = self.calc_tilesize(clip)

    return inference(clip, self.model, overlaps, tiles, self.backend, **kwargs)

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode

Performs preprocessing on the clip prior to inference.

Source code
225
226
227
228
229
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Performs preprocessing on the clip prior to inference."""

    clip = depth(clip, 16 if self.backend.fp16 else 32, vs.FLOAT)
    return limiter(clip, func=self.__class__)
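
Subclasses customise this stage by overriding the hook; a minimal sketch that keeps the depth conversion but drops the limiter (illustrative only, mirroring the base implementation above):

from typing import Any

import vapoursynth as vs
from vstools import ConstantFormatVideoNode, depth
from vsscale import GenericOnnxScaler

class NoLimitScaler(GenericOnnxScaler):
    def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
        # Same depth conversion as the base class, without the range limiting.
        return depth(clip, 16 if self.backend.fp16 else 32, vs.FLOAT)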

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
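
A sketch of the prefix routing; foo and bar are stand-in names, not real parameters of these methods:

scaler.scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    preprocess_foo=1,  # arrives at preprocess_clip as foo=1
    inference_bar=2,   # arrives at inference as bar=2
)
# Unprefixed extras fall through to the final scaling step.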

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
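
For a 1920x1080 source, rfactor=1.5 therefore lands on 2880x1620 via ceil() (usage sketch):

from vsscale import ArtCNN

ss = ArtCNN().supersample(clip, rfactor=1.5)  # ceil(1920 * 1.5) x ceil(1080 * 1.5)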

Waifu2x

Waifu2x(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: _Waifu2xCunet

Well known Image Super-Resolution for Anime-Style Art.

Defaults to Cunet.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Classes:

  • AnimeStyleArt

    Waifu2x model for anime-style art.

  • AnimeStyleArtRGB

    RGB version of the anime-style model.

  • Cunet

    CUNet (Compact U-Net) model for anime art.

  • Photo

    Waifu2x model trained on real-world photographic images.

  • SwinUnetArt

    Swin-Unet-based model trained on anime-style images.

  • SwinUnetArtScan

    Swin-Unet model trained on anime scans.

  • SwinUnetPhoto

    Swin-Unet model trained on photographic content.

  • SwinUnetPhotoV2

    Improved Swin-Unet model for photos (v2).

  • UpConv7AnimeStyleArt

    UpConv7 model variant optimized for anime-style images.

  • UpConv7Photo

    UpConv7 model variant optimized for photographic images.

  • UpResNet10

    UpResNet10 model offering a balance of speed and quality.

  • cached_property

    Read only version of functools.cached_property.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip
  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )
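
Putting the knobs above together, a denoising 2x upscale with tiling might look like this (the tiling values are illustrative):

from vsscale import Waifu2x

doubled = Waifu2x(scale=2, noise=1, tiles=2, overlap=16).scale(
    clip, clip.width * 2, clip.height * 2
)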

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(SPath(model).resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

AnimeStyleArt

AnimeStyleArt(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2x

Waifu2x model for anime-style art.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.AnimeStyleArt().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip

    Handles postprocessing of the model's output after inference.

  • preprocess_clip

    Performs preprocessing on the clip prior to inference.

  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(SPath(model).resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode

Handles postprocessing of the model's output after inference.

Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Handles postprocessing of the model's output after inference."""

    return depth(
        clip, input_clip, dither_type=DitherType.ORDERED if 0 in {clip.width, clip.height} else DitherType.AUTO
    )

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode

Performs preprocessing on the clip prior to inference.

Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    """Performs preprocessing on the clip prior to inference."""

    clip = depth(clip, 16 if self.backend.fp16 else 32, vs.FLOAT)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

AnimeStyleArtRGB

AnimeStyleArtRGB(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2xRGB

RGB version of the anime-style model.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.AnimeStyleArtRGB().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip
  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiles are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(SPath(model).resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if clip.format != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip
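
Since extra kwargs are merged into the resample call here, the postprocess_ prefix on scale() can steer the dither for this model family (sketch, assuming DitherType from vstools):

from vstools import DitherType
from vsscale import Waifu2x

doubled = Waifu2x.AnimeStyleArtRGB().scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    postprocess_dither_type=DitherType.ERROR_DIFFUSION,
)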

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
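
For illustration, a minimal sketch of the prefix routing described above: postprocess_clip forwards extra kwargs (such as dither_type) to the kernel's resample call, while unprefixed kwargs fall through to _finish_scale. DitherType is assumed to be importable from vstools, as used in postprocess_clip:

from vstools import DitherType

scaled = scaler.scale(
    clip, 1920, 1080,
    # Arrives at postprocess_clip as dither_type=DitherType.ERROR_DIFFUSION
    postprocess_dither_type=DitherType.ERROR_DIFFUSION,
)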

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
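
For example, a 1.5x supersample; per the ceil calls above, non-integer results are rounded up:

# 1280x720 -> 1920x1080
ss = scaler.supersample(clip, rfactor=1.5)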

Cunet

Cunet(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: _Waifu2xCunet

CUNet (Compact U-Net) model for anime art.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.Cunet().scale(clip, clip.width * 2, clip.height * 2)
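
The upscaling factor and denoising strength can also be set up front; for instance, a 2x upscale with light noise reduction (values per the parameter list below):

doubled_denoised = Waifu2x.Cunet(scale=2, noise=0).scale(clip, clip.width * 2, clip.height * 2)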

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip
  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)
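
A short sketch of overriding the merged defaults at call time; any key passed here wins over the instance attributes, and the return shape assumed here is vsmlrt's ((tile_w, tile_h), (overlap_w, overlap_h)):

# Force a 2x2 tile split with a wider horizontal overlap
(tile_w, tile_h), (overlap_w, overlap_h) = scaler.calc_tilesize(clip, tiles=(2, 2), overlap_w=16)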

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to use for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to use for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)
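
A brief sketch contrasting the two resolvers; the string identifier shown is an assumption about the accepted forms:

from vsscale import ArtCNN

scaler_type = ArtCNN.from_param("ArtCNN")  # resolved type, not instantiated
scaler_inst = ArtCNN.ensure_obj("ArtCNN")  # ready-to-use instance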

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    # The Cunet model ruins image borders, so we pad the clip before upscaling and crop it back afterwards.
    if kwargs.pop("no_pad", False):
        return super().inference(clip, **kwargs)

    with padder.ctx(16, 4) as pad:
        padded = pad.MIRROR(clip)
        scaled = super().inference(padded, **kwargs)
        cropped = pad.CROP(scaled)

    return cropped

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    # The Cunet model also has a tint issue, but it is not constant:
    # it leaves flat areas alone while tinting detailed areas.
    # Since most people will use Cunet to rescale details, the tint fix is enabled by default.
    if kwargs.pop("no_tint_fix", False):
        return super().postprocess_clip(clip, input_clip, **kwargs)

    tint_fix = norm_expr(
        clip, 'x 0.5 255 / + 0 1 clamp',
        planes=0 if get_video_format(input_clip).color_family is vs.GRAY else None,
        func="Waifu2x." + self.__class__.__name__
    )
    return super().postprocess_clip(tint_fix, input_clip, **kwargs)

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

    Additional notes for the Cunet model:

      • The model can cause artifacts around the image edges. To mitigate this, mirrored padding is applied to the image before inference. This behavior can be disabled by setting inference_no_pad=True.

      • A tint issue is also present, but it is not constant: it leaves flat areas alone while tinting detailed areas. Since most people will use Cunet to rescale details, the tint fix is enabled by default. This behavior can be disabled with postprocess_no_tint_fix=True.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

                        Additional notes for the Cunet model:
                        - The model can cause artifacts around the image edges. To mitigate this,
                          mirrored padding is applied to the image before inference.
                          This behavior can be disabled by setting `inference_no_pad=True`.
                        - A tint issue is also present, but it is not constant: it leaves flat areas
                          alone while tinting detailed areas. Since most people will use Cunet to
                          rescale details, the tint fix is enabled by default.
                          This behavior can be disabled with `postprocess_no_tint_fix=True`.

    :return:            The scaled clip.
    """
    ...
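
A sketch of toggling the two Cunet-specific behaviors documented above:

upscaled = Waifu2x.Cunet().scale(
    clip, clip.width * 2, clip.height * 2,
    inference_no_pad=True,          # skip the mirrored edge padding
    postprocess_no_tint_fix=True,   # keep the raw model output, tint included
)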

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

Photo

Photo(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2xRGB

Waifu2x model trained on real-world photographic images.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.Photo().scale(clip, clip.width * 2, clip.height * 2)
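
Because this model targets photographic content, a common pattern is denoising without upscaling. This is a sketch only; whether a given scale/noise combination ships as an ONNX model should be verified against the vsmlrt model pack:

denoised = Waifu2x.Photo(scale=1, noise=2).scale(clip, clip.width, clip.height)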

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip
  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to use for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to use for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if clip.format != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

SwinUnetArt

SwinUnetArt(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2xRGB

Swin-Unet-based model trained on anime-style images.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.SwinUnetArt().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip
  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to use for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function to use for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)
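
Migrating is a drop-in rename (a sketch: scaler stands for any instance of this class):

doubled = scaler.supersample(clip, 2.0)  # replaces scaler.multi(clip, 2.0)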

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if clip.format != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)
    return limiter(clip, func=self.__class__)
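
Spelled out, the pre-pass resamples to a float RGB working format matching the backend precision, then clamps out-of-range values. A sketch assuming an fp16-capable backend (limiter and Matrix are assumed to come from vstools, as in the surrounding code):

import vapoursynth as vs
from vstools import Matrix, limiter

rgb = scaler.kernel.resample(clip, vs.RGBH, Matrix.RGB)  # fp16 backend -> 16-bit float RGB; otherwise vs.RGBS
rgb = limiter(rgb)  # clamp to legal range before inference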

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method; unprefixed arguments are forwarded to _finish_scale.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.
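
Example of the prefix routing (a sketch: the dither value is only an illustration, and any keyword accepted by the target method can be routed the same way):

from vsscale import Waifu2x
from vstools import DitherType

out = Waifu2x().scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    # stripped of its prefix and forwarded to postprocess_clip as dither_type
    postprocess_dither_type=DitherType.ERROR_DIFFUSION,
)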

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       Subpixel shift (top, left) applied during scaling.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
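
For variable resolution input the backend must therefore be a TRT instance with dynamic shapes; a hedged sketch of configuring this explicitly instead of relying on the fallbacks above (the shape values are assumptions to tune for your content):

from vsmlrt import Backend
from vsscale import Waifu2x

backend = Backend.TRT(fp16=True, static_shape=False, max_shapes=(1920, 1080), opt_shapes=(1280, 720))
scaled = Waifu2x(backend=backend).scale(variable_res_clip)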

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
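
Since ceil() is applied to both dimensions, non-integer factors still yield integer output sizes. A sketch (scaler stands for any instance of this class):

ss = scaler.supersample(clip, 1.5)  # a 1280x720 input becomes 1920x1080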

SwinUnetArtScan

SwinUnetArtScan(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2xRGB

Swin-Unet model trained on anime scans.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.SwinUnetArtScan().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip
  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)
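
Per-call keywords are merged over the instance settings. A sketch (w2x stands for an instance of this class; the return layout follows vsmlrt.calc_tilesize):

tilesize, overlap = w2x.calc_tilesize(clip, tiles=2)  # force a 2-tile split regardless of self.tiles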

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function to be used for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to be used for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function to be used for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to be used for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if clip.format != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method; unprefixed arguments are forwarded to _finish_scale.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       Subpixel shift (top, left) applied during scaling.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

SwinUnetPhoto

SwinUnetPhoto(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2xRGB

Swin-Unet model trained on photographic content.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.SwinUnetPhoto().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip
  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function to be used for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to be used for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function to be used for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function to be used for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if clip.format != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method; unprefixed arguments are forwarded to _finish_scale.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       Subpixel shift (top, left) applied during scaling.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

SwinUnetPhotoV2

SwinUnetPhotoV2(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2xRGB

Improved Swin-Unet model for photos (v2).

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.SwinUnetPhotoV2().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip
  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)
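
For reference, a minimal sketch of inspecting the computed tile geometry, assuming the top-level Waifu2x scaler and a hypothetical 1080p blank clip; the exact values come from vsmlrt's own calc_tilesize logic:

import vapoursynth as vs

from vsscale import Waifu2x

core = vs.core

# Hypothetical input clip, for illustration only.
clip = core.std.BlankClip(width=1920, height=1080, format=vs.YUV420P16)

# tiles=2 requests tiled inference; overlap_w and overlap_h default to 8.
(tile_w, tile_h), (overlap_w, overlap_h) = Waifu2x(tiles=2).calc_tilesize(clip)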

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if clip.format != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method; unprefixed arguments are forwarded to _finish_scale.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.
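
As an illustration of the prefix routing described above, a minimal sketch, assuming the top-level Waifu2x scaler and that DitherType is importable from vstools (as in the source below):

from vstools import DitherType

from vsscale import Waifu2x

# "postprocess_dither_type" has its prefix stripped and is passed to postprocess_clip;
# any remaining unprefixed keyword arguments fall through to _finish_scale.
doubled = Waifu2x().scale(
    clip,
    clip.width * 2,
    clip.height * 2,
    postprocess_dither_type=DitherType.ERROR_DIFFUSION,
)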

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_`, `postprocess_`, or `inference_`
                        to route an argument to the corresponding method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)
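
As the variable resolution branch above shows, such clips require the TRT backend with dynamic shapes. A hedged sketch of configuring this up front rather than relying on the runtime warnings, assuming the vsmlrt TRT backend fields used in the source:

from vsmlrt import Backend

from vsscale import Waifu2x

# Pre-configure dynamic shapes so scale() does not have to patch the backend at runtime.
backend = Backend.TRT(static_shape=False, max_shapes=(1936, 1088), opt_shapes=(64, 64))

scaler = Waifu2x(backend=backend)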

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]
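
A short usage sketch; per the source above, supersample resolves to a scale call with the multiplied (and ceiled) dimensions:

from vsscale import Waifu2x

doubled = Waifu2x().supersample(clip, 2.0)

# ...equivalent to:
doubled = Waifu2x().scale(clip, clip.width * 2, clip.height * 2)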

UpConv7AnimeStyleArt

UpConv7AnimeStyleArt(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2xRGB

UpConv7 model variant optimized for anime-style images.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.UpConv7AnimeStyleArt().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip
  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if clip.format != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method; unprefixed arguments are forwarded to _finish_scale.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_`, `postprocess_`, or `inference_`
                        to route an argument to the corresponding method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

UpConv7Photo

UpConv7Photo(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2xRGB

UpConv7 model variant optimized for photographic images.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.UpConv7Photo().scale(clip, clip.width * 2, clip.height * 2)
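
Since scale=1 performs no upscaling, the noise parameter can also be used on its own; a brief sketch of denoise-only processing at the source resolution:

denoised = Waifu2x.UpConv7Photo(scale=1, noise=3).scale(clip)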

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Methods:

  • calc_tilesize

    Reimplementation of the vsmlrt.calc_tilesize helper function.

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip
  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when tiling is used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read-only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of the vsmlrt.calc_tilesize helper function.

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

    if clip.format != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_, postprocess_, or inference_ to route an argument to the corresponding method; unprefixed arguments are forwarded to _finish_scale.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_`, `postprocess_`, or `inference_`
                        to route an argument to the corresponding method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

UpResNet10

UpResNet10(
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
)

Bases: BaseWaifu2xRGB

UpResNet10 model offering a balance of speed and quality.

Example usage:

from vsscale import Waifu2x

doubled = Waifu2x.UpResNet10().scale(clip, clip.width * 2, clip.height * 2)

Parameters:

  • scale

    (Literal[1, 2, 4], default: 2 ) –

    Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.

  • noise

    (Literal[-1, 0, 1, 2, 3], default: -1 ) –

    Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.

  • backend

    (Any | None, default: None ) –

    The backend to be used with the vs-mlrt framework. If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.

  • tiles

    (int | tuple[int, int] | None, default: None ) –

    Whether to split the image into multiple tiles. This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.

  • tilesize

    (int | tuple[int, int] | None, default: None ) –

    The size of each tile when splitting the image (if tiles are enabled).

  • overlap

    (int | tuple[int, int] | None, default: None ) –

    The size of overlap between tiles.

  • max_instances

    (int, default: 2 ) –

    Maximum instances to spawn when scaling a variable resolution clip.

  • kernel

    (KernelLike, default: Catrom ) –

    Base kernel to be used for certain scaling/shifting/resampling operations. Defaults to Catrom.

  • scaler

    (ScalerLike | None, default: None ) –

    Scaler used for scaling operations. Defaults to kernel.

  • shifter

    (KernelLike | None, default: None ) –

    Kernel used for shifting operations. Defaults to kernel.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend. See the vsmlrt backend's docstring for more details.

Classes:

Methods:

  • calc_tilesize

    Reimplementation of vsmlrt.calc_tilesize helper function

  • ensure_obj

    Ensure that the input is a scaler instance, resolving it if necessary.

  • from_param

    Resolve and return a scaler type from a given input (string, type, or instance).

  • get_scale_args

    Generate the keyword arguments used for scaling.

  • inference
  • kernel_radius

    Return the effective kernel radius for the scaler.

  • multi

    Deprecated alias for supersample.

  • postprocess_clip
  • preprocess_clip
  • pretty_string

    Cached property returning a user-friendly string representation.

  • scale

    Scale the given clip using the ONNX model.

  • supersample

    Supersample a clip by a given scaling factor.

Attributes:

Source code
def __init__(
    self,
    scale: Literal[1, 2, 4] = 2,
    noise: Literal[-1, 0, 1, 2, 3] = -1,
    backend: Any | None = None,
    tiles: int | tuple[int, int] | None = None,
    tilesize: int | tuple[int, int] | None = None,
    overlap: int | tuple[int, int] | None = None,
    max_instances: int = 2,
    *,
    kernel: KernelLike = Catrom,
    scaler: ScalerLike | None = None,
    shifter: KernelLike | None = None,
    **kwargs: Any
) -> None:
    """
    :param scale:           Upscaling factor. 1 = no upscaling, 2 = 2x, 4 = 4x.
    :param noise:           Noise reduction level. -1 = none, 0 = low, 1 = medium, 2 = high, 3 = highest.
    :param backend:         The backend to be used with the vs-mlrt framework.
                            If set to None, the most suitable backend will be automatically selected, prioritizing fp16 support.
    :param tiles:           Whether to split the image into multiple tiles.
                            This can help reduce VRAM usage, but note that the model's behavior may vary when they are used.
    :param tilesize:        The size of each tile when splitting the image (if tiles are enabled).
    :param overlap:         The size of overlap between tiles.
    :param max_instances:   Maximum instances to spawn when scaling a variable resolution clip.
    :param kernel:          Base kernel to be used for certain scaling/shifting/resampling operations.
                            Defaults to Catrom.
    :param scaler:          Scaler used for scaling operations. Defaults to kernel.
    :param shifter:         Kernel used for shifting operations. Defaults to kernel.
    :param **kwargs:        Additional arguments to pass to the backend.
                            See the vsmlrt backend's docstring for more details.
    """
    self.scale_w2x = scale
    self.noise = noise
    super().__init__(
        None, backend, tiles, tilesize, overlap, max_instances, kernel=kernel, scaler=scaler, shifter=shifter, **kwargs
    )

backend instance-attribute

backend = autoselect_backend(**_default_args | kwargs)

kernel instance-attribute

kernel = ensure_obj(kernel, __class__)

kwargs instance-attribute

kwargs: dict[str, Any] = kwargs

Arguments passed to the implemented funcs or internal scale function.

max_instances instance-attribute

max_instances = max_instances

model instance-attribute

model = str(resolve())

noise instance-attribute

noise: Literal[-1, 0, 1, 2, 3] = noise

overlap instance-attribute

overlap = overlap

overlap_h instance-attribute

overlap_h = 8

overlap_w instance-attribute

overlap_w = 8

scale_function instance-attribute

scale_function: Callable[..., VideoNode]

Scale function called internally when performing scaling operations.

scale_w2x instance-attribute

scale_w2x: Literal[1, 2, 4] = scale

scaler instance-attribute

scaler = ensure_obj(scaler or kernel, __class__)

shifter instance-attribute

shifter = ensure_obj(shifter or kernel, __class__)

tiles instance-attribute

tiles = tiles

tilesize instance-attribute

tilesize = tilesize

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function
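
A sketch of inspecting the computed tiling (the parameter values here are arbitrary):

from vsscale import Waifu2x

scaler = Waifu2x.UpResNet10(tiles=2)

# Returns ((tile_w, tile_h), (overlap_w, overlap_h)).
(tile_w, tile_h), (overlap_w, overlap_h) = scaler.calc_tilesize(clip)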

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.
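
A short sketch of the accepted inputs, each resolving to an instance:

from vsscale import Waifu2x

scaler = Waifu2x.UpResNet10.ensure_obj()                      # None -> default instance
scaler = Waifu2x.UpResNet10.ensure_obj(Waifu2x.UpResNet10)    # a type is instantiated
scaler = Waifu2x.UpResNet10.ensure_obj(Waifu2x.UpResNet10())  # an existing instance passes through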

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler
    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except
    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.
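
Unlike ensure_obj, this resolves to the scaler type itself; a sketch:

from vsscale import Waifu2x

scaler_t = Waifu2x.UpResNet10.from_param(Waifu2x.UpResNet10())  # an instance resolves to its type
scaler = scaler_t()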

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip
    (VideoNode) –

    The source clip.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width
    (int | None, default: None ) –

    Target width.

  • height
    (int | None, default: None ) –

    Target height.

  • kwargs
    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.
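
The merge is left to right, so per-call kwargs override the instance's stored kwargs, which in turn override the computed defaults. A hypothetical sketch:

from vsscale import Waifu2x

args = Waifu2x.UpResNet10().get_scale_args(clip, shift=(0.5, 0.25), width=1920, height=1080)
# -> {'width': 1920, 'height': 1080, 'src_top': 0.5, 'src_left': 0.25, ...}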

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    from vsmlrt import Waifu2x as mlrt_Waifu2x
    from vsmlrt import Waifu2xModel

    return mlrt_Waifu2x(
        clip,
        self.noise,
        self.scale_w2x,
        self.tiles,
        self.tilesize,
        self.overlap,
        Waifu2xModel(self._model),
        self.backend,
        **kwargs
    )

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • multi
    (float, default: 2.0 ) –

    Supersampling factor.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)
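
Migration is a one-for-one rename; a sketch, assuming scaler is any scaler instance from this module:

doubled = scaler.multi(clip, 2.0)        # deprecated, emits a DeprecationWarning
doubled = scaler.supersample(clip, 2.0)  # preferred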

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    assert check_variable_format(clip, self.__class__)

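    # Resample back to the input clip's format if inference changed it, defaulting to ordered dithering.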
    if clip.format != get_video_format(input_clip):
        kwargs = dict(dither_type=DitherType.ORDERED) | kwargs
        clip = self.kernel.resample(clip, input_clip, Matrix.from_video(input_clip, func=self.__class__), **kwargs)

    return clip

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
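    # Convert to float RGB for inference: half precision when the backend supports fp16, single precision otherwise.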
    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip
    (VideoNode) –

    The input clip to be scaled.

  • width
    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height
    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift
    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs
    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.
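
For example, an argument prefixed with postprocess_ is stripped of its prefix and forwarded to postprocess_clip. A sketch, assuming DitherType is the vstools enum used by the resample step:

from vsscale import Waifu2x
from vstools import DitherType

scaler = Waifu2x.UpResNet10()

# dither_type is forwarded to postprocess_clip and used when resampling back to the input format.
doubled = scaler.scale(clip, clip.width * 2, clip.height * 2, postprocess_dither_type=DitherType.ERROR_DIFFUSION)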

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

    :return:            The scaled clip.
    """
    from vsmlrt import Backend

    assert check_variable_format(clip, self.__class__)

    width, height = self._wh_norm(clip, width, height)

    preprocess_kwargs = dict[str, Any]()
    postprocess_kwargs = dict[str, Any]()
    inference_kwargs = dict[str, Any]()

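    # Route each prefixed kwarg to its matching processing step, stripping the prefix.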
    for k in kwargs.copy():
        for prefix, ckwargs in zip(
            ("preprocess_", "postprocess_", "inference_"),
            (preprocess_kwargs, postprocess_kwargs, inference_kwargs)
        ):
            if k.startswith(prefix):
                ckwargs[k.removeprefix(prefix)] = kwargs.pop(k)
                break

    wclip = self.preprocess_clip(clip, **preprocess_kwargs)

    if 0 not in {clip.width, clip.height}:
        scaled = self.inference(wclip, **inference_kwargs)
    else:
        if not isinstance(self.backend, Backend.TRT):
            raise CustomValueError(
                "Variable resolution clips can only be processed with TRT Backend!", self.__class__, self.backend
            )

        warning(f"{self.__class__.__name__}: Variable resolution clip detected!")

        if self.backend.static_shape:
            warning("static_shape is True, setting it to False...")
            self.backend.static_shape = False

        if not self.backend.max_shapes:
            warning("max_shapes is None, setting it to (1936, 1088). You may want to adjust it...")
            self.backend.max_shapes = (1936, 1088)

        if not self.backend.opt_shapes:
            warning("opt_shapes is None, setting it to (64, 64). You may want to adjust it...")
            self.backend.opt_shapes = (64, 64)

        scaled = ProcessVariableResClip[ConstantFormatVideoNode].from_func(
            wclip, lambda c: self.inference(c, **inference_kwargs), False, wclip.format, self.max_instances
        )

    scaled = self.postprocess_clip(scaled, clip, **postprocess_kwargs)

    return self._finish_scale(scaled, clip, width, height, shift, **kwargs)

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip
    (VideoNodeT) –

    The source clip.

  • rfactor
    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift
    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs
    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

cached_property

cached_property(func: Callable[Concatenate[_BaseScalerT, P], T_co])

Bases: cached_property[T_co]

Read only version of functools.cached_property.

Source code
def __init__(self, func: Callable[Concatenate[_BaseScalerT, P], T_co]) -> None: ...

calc_tilesize

calc_tilesize(
    clip: VideoNode, **kwargs: Any
) -> tuple[tuple[int, int], tuple[int, int]]

Reimplementation of vsmlrt.calc_tilesize helper function

Source code
def calc_tilesize(self, clip: vs.VideoNode, **kwargs: Any) -> tuple[tuple[int, int], tuple[int, int]]:
    """Reimplementation of vsmlrt.calc_tilesize helper function"""

    from vsmlrt import calc_tilesize

    kwargs = dict(
        tiles=self.tiles,
        tilesize=self.tilesize,
        width=clip.width,
        height=clip.height,
        multiple=1,
        overlap_w=self.overlap_w,
        overlap_h=self.overlap_h,
    ) | kwargs

    return calc_tilesize(**kwargs)

ensure_obj classmethod

ensure_obj(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self

Ensure that the input is a scaler instance, resolving it if necessary.

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • Self

    Scaler instance.

Source code
@classmethod
def ensure_obj(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> Self:
    """
    Ensure that the input is a scaler instance, resolving it if necessary.

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Scaler instance.
    """
    return _base_ensure_obj(cls, scaler, func_except)

from_param classmethod

from_param(
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]

Resolve and return a scaler type from a given input (string, type, or instance).

Parameters:

  • scaler

    (str | type[Self] | Self | None, default: None ) –

    Scaler identifier (string, class, or instance).

  • func_except

    (FuncExceptT | None, default: None ) –

    Function returned for custom error handling.

Returns:

  • type[Self]

    Resolved scaler type.

Source code
@classmethod
def from_param(
    cls,
    scaler: str | type[Self] | Self | None = None,
    /,
    func_except: FuncExceptT | None = None,
) -> type[Self]:
    """
    Resolve and return a scaler type from a given input (string, type, or instance).

    :param scaler:          Scaler identifier (string, class, or instance).
    :param func_except:     Function returned for custom error handling.
    :return:                Resolved scaler type.
    """
    return _base_from_param(cls, scaler, cls._err_class, func_except)

get_scale_args

get_scale_args(
    clip: VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any
) -> dict[str, Any]

Generate the keyword arguments used for scaling.

Parameters:

  • clip

    (VideoNode) –

    The source clip.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left).

  • width

    (int | None, default: None ) –

    Target width.

  • height

    (int | None, default: None ) –

    Target height.

  • kwargs

    (Any, default: {} ) –

    Extra parameters to merge.

Returns:

  • dict[str, Any]

    Final dictionary of keyword arguments for the scale function.

Source code
def get_scale_args(
    self,
    clip: vs.VideoNode,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    width: int | None = None,
    height: int | None = None,
    **kwargs: Any,
) -> dict[str, Any]:
    """
    Generate the keyword arguments used for scaling.

    :param clip:    The source clip.
    :param shift:   Subpixel shift (top, left).
    :param width:   Target width.
    :param height:  Target height.
    :param kwargs:  Extra parameters to merge.
    :return:        Final dictionary of keyword arguments for the scale function.
    """
    return dict(width=width, height=height, src_top=shift[0], src_left=shift[1]) | self.kwargs | kwargs

inference

inference(
    clip: ConstantFormatVideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def inference(self, clip: ConstantFormatVideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    # Cunet model ruins image borders, so we need to pad it before upscale and crop it after.
    if kwargs.pop("no_pad", False):
        return super().inference(clip, **kwargs)

    with padder.ctx(16, 4) as pad:
        padded = pad.MIRROR(clip)
        scaled = super().inference(padded, **kwargs)
        cropped = pad.CROP(scaled)

    return cropped

kernel_radius

kernel_radius() -> int

Return the effective kernel radius for the scaler.

Returns:

  • int

    Kernel radius.

Raises:

  • CustomNotImplementedError

    If no kernel radius is defined.

Source code
@cached_property
def kernel_radius(self) -> int:
    """
    Return the effective kernel radius for the scaler.

    :raises CustomNotImplementedError:  If no kernel radius is defined.
    :return:                            Kernel radius.
    """
    ...

multi

multi(
    clip: VideoNodeT,
    multi: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Deprecated alias for supersample.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • multi

    (float, default: 2.0 ) –

    Supersampling factor.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Source code
@deprecated('The "multi" method is deprecated. Use "supersample" instead.', category=DeprecationWarning)
def multi(
    self, clip: VideoNodeT, multi: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Deprecated alias for `supersample`.

    :param clip:    The source clip.
    :param multi:   Supersampling factor.
    :param shift:   Subpixel shift (top, left) applied during scaling.
    :param kwargs:  Additional arguments forwarded to the scale function.
    :return:        The supersampled clip.
    """
    return self.supersample(clip, multi, shift, **kwargs)

postprocess_clip

postprocess_clip(
    clip: VideoNode, input_clip: VideoNode, **kwargs: Any
) -> ConstantFormatVideoNode
Source code
def postprocess_clip(self, clip: vs.VideoNode, input_clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    # Cunet model also has a tint issue but it is not constant
    # It leaves flat areas alone but tints detailed areas.
    # Since most people will use Cunet to rescale details, the tint fix is enabled by default.
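    # The fix adds 0.5/255 to every sample and clamps the result to [0, 1].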
    if kwargs.pop("no_tint_fix", False):
        return super().postprocess_clip(clip, input_clip, **kwargs)

    tint_fix = norm_expr(
        clip, 'x 0.5 255 / + 0 1 clamp',
        planes=0 if get_video_format(input_clip).color_family is vs.GRAY else None,
        func="Waifu2x." + self.__class__.__name__
    )
    return super().postprocess_clip(tint_fix, input_clip, **kwargs)

preprocess_clip

preprocess_clip(clip: VideoNode, **kwargs: Any) -> ConstantFormatVideoNode
Source code
def preprocess_clip(self, clip: vs.VideoNode, **kwargs: Any) -> ConstantFormatVideoNode:
    clip = self.kernel.resample(clip, vs.RGBH if self.backend.fp16 else vs.RGBS, Matrix.RGB)
    return limiter(clip, func=self.__class__)

pretty_string

pretty_string() -> str

Cached property returning a user-friendly string representation.

Returns:

  • str

    Pretty-printed string with arguments.

Source code
@cached_property
def pretty_string(self) -> str:
    """
    Cached property returning a user-friendly string representation.

    :return: Pretty-printed string with arguments.
    """
    return self._pretty_string()

scale

scale(
    clip: VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode

Scale the given clip using the ONNX model.

Parameters:

  • clip

    (VideoNode) –

    The input clip to be scaled.

  • width

    (int | None, default: None ) –

    The target width for scaling. If None, the width of the input clip will be used.

  • height

    (int | None, default: None ) –

    The target height for scaling. If None, the height of the input clip will be used.

  • shift

    (tuple[float, float], default: (0, 0) ) –

    A tuple representing the shift values for the x and y axes.

  • **kwargs

    (Any, default: {} ) –

    Additional arguments to be passed to the preprocess_clip, postprocess_clip, inference, and _finish_scale methods. Use the prefix preprocess_ or postprocess_ to pass an argument to the respective method. Use the prefix inference_ to pass an argument to the inference method.

    Additional notes for the Cunet model:

    - The model can cause artifacts around the image edges. To mitigate this, mirrored padding is applied to the image before inference; disable it with inference_no_pad=True.
    - A tint issue is also present, but it is not constant: flat areas are left alone while detailed areas are tinted. Since most people use Cunet to rescale details, the tint fix is enabled by default; disable it with postprocess_no_tint_fix=True.

Returns:

  • ConstantFormatVideoNode

    The scaled clip.
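
A sketch disabling both Cunet-specific fixups, assuming this model is exposed as Waifu2x.Cunet:

from vsscale import Waifu2x

doubled = Waifu2x.Cunet().scale(
    clip, clip.width * 2, clip.height * 2,
    inference_no_pad=True,
    postprocess_no_tint_fix=True,
)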

Source code
def scale(
    self,
    clip: vs.VideoNode,
    width: int | None = None,
    height: int | None = None,
    shift: tuple[float, float] = (0, 0),
    **kwargs: Any
) -> ConstantFormatVideoNode:
    """
    Scale the given clip using the ONNX model.

    :param clip:        The input clip to be scaled.
    :param width:       The target width for scaling. If None, the width of the input clip will be used.
    :param height:      The target height for scaling. If None, the height of the input clip will be used.
    :param shift:       A tuple representing the shift values for the x and y axes.
    :param **kwargs:    Additional arguments to be passed to the `preprocess_clip`, `postprocess_clip`,
                        `inference`, and `_finish_scale` methods.
                        Use the prefix `preprocess_` or `postprocess_` to pass an argument to the respective method.
                        Use the prefix `inference_` to pass an argument to the inference method.

                        Additional notes for the Cunet model:
                        - The model can cause artifacts around the image edges.
                          To mitigate this, mirrored padding is applied to the image before inference.
                          This behavior can be disabled by setting `inference_no_pad=True`.
                        - A tint issue is also present, but it is not constant: flat areas are left alone
                          while detailed areas are tinted. Since most people use Cunet to rescale details,
                          the tint fix is enabled by default.
                          This behavior can be disabled with `postprocess_no_tint_fix=True`.

    :return:            The scaled clip.
    """
    ...

supersample

supersample(
    clip: VideoNodeT,
    rfactor: float = 2.0,
    shift: tuple[TopShift, LeftShift] = (0, 0),
    **kwargs: Any
) -> VideoNodeT

Supersample a clip by a given scaling factor.

Parameters:

  • clip

    (VideoNodeT) –

    The source clip.

  • rfactor

    (float, default: 2.0 ) –

    Scaling factor for supersampling.

  • shift

    (tuple[TopShift, LeftShift], default: (0, 0) ) –

    Subpixel shift (top, left) applied during scaling.

  • kwargs

    (Any, default: {} ) –

    Additional arguments forwarded to the scale function.

Returns:

  • VideoNodeT

    The supersampled clip.

Raises:

  • CustomValueError

    If resulting resolution is non-positive.

Source code
def supersample(
    self, clip: VideoNodeT, rfactor: float = 2.0, shift: tuple[TopShift, LeftShift] = (0, 0), **kwargs: Any
) -> VideoNodeT:
    """
    Supersample a clip by a given scaling factor.

    :param clip:                The source clip.
    :param rfactor:             Scaling factor for supersampling.
    :param shift:               Subpixel shift (top, left) applied during scaling.
    :param kwargs:              Additional arguments forwarded to the scale function.
    :raises CustomValueError:   If resulting resolution is non-positive.
    :return:                    The supersampled clip.
    """
    assert check_variable_resolution(clip, self.supersample)

    dst_width, dst_height = ceil(clip.width * rfactor), ceil(clip.height * rfactor)

    if max(dst_width, dst_height) <= 0.0:
        raise CustomValueError(
            'Multiplying the resolution by "rfactor" must result in a positive resolution!',
            self.supersample,
            rfactor,
        )

    return self.scale(clip, dst_width, dst_height, shift, **kwargs)  # type: ignore[return-value]

autoselect_backend

autoselect_backend(**kwargs: Any) -> Any

Try to select the best backend for the current system.
If the system has an NVIDIA GPU: TRT > CUDA (ORT) > Vulkan > OpenVINO GPU.
Else: DirectML (D3D12) > MIGraphX > Vulkan > CPU (ORT) > CPU OpenVINO.

Parameters:

  • kwargs

    (Any, default: {} ) –

    Additional arguments to pass to the backend.

Returns:

  • Any

    The selected backend.
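
A minimal sketch, assuming the function is importable from vsscale and that fp16 is accepted by the selected backend (unsupported keywords are filtered out):

from vsscale import Waifu2x, autoselect_backend

backend = autoselect_backend(fp16=True)
doubled = Waifu2x.UpResNet10(backend=backend).scale(clip, clip.width * 2, clip.height * 2)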

Source code
def autoselect_backend(**kwargs: Any) -> Any:
    """
    Try to select the best backend for the current system.
    If the system has an NVIDIA GPU: TRT > CUDA (ORT) > Vulkan > OpenVINO GPU
    Else: DirectML (D3D12) > MIGraphX > Vulkan > CPU (ORT) > CPU OpenVINO

    :param kwargs:        Additional arguments to pass to the backend.
    :return:              The selected backend.
    """
    import os

    from vsmlrt import Backend

    backend: Any

    if get_nvidia_version():
        if hasattr(core, "trt"):
            backend = Backend.TRT
        elif hasattr(core, "ort"):
            backend = Backend.ORT_CUDA
        elif hasattr(core, "ncnn"):
            backend = Backend.NCNN_VK
        else:
            backend = Backend.OV_GPU
    else:
        if hasattr(core, "ort") and os.name == "nt":
            backend = Backend.ORT_DML
        elif hasattr(core, "migx"):
            backend = Backend.MIGX
        elif hasattr(core, "ncnn"):
            backend = Backend.NCNN_VK
        elif hasattr(core, "ort"):
            backend = Backend.ORT_CPU
        else:
            backend = Backend.OV_CPU

    return backend(**_clean_keywords(kwargs, backend))